
Juniper Networks Design—

Data Center
15.a

Student Guide
Volume 1 of 2

Worldwide Education Services

1133 Innovation Way


Sunnyvale, CA 94089
USA
408-745-2000
www.juniper.net

Course Number: EDU-JUN-JND-DC


This document is produced by Juniper Networks, Inc.
This document or any part thereof may not be reproduced or transmitted in any form under penalty of law, without the prior written permission of Juniper Networks Education
Services.
Juniper Networks, Junos, Steel-Belted Radius, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The
Juniper Networks Logo, the Junos logo, and JunosE are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service
marks are the property of their respective owners.
Juniper Networks Design—Data Center Student Guide, Revision 15.a
Copyright © 2015 Juniper Networks, Inc. All rights reserved.
Printed in USA.
Revision History:
Revision 15.a—September 2015.
The information in this document is current as of the date listed above.
The information in this document has been carefully verified and is believed to be accurate. Juniper Networks assumes no responsibilities for any inaccuracies that may
appear in this document. In no event will Juniper Networks be liable for direct, indirect, special, exemplary, incidental, or consequential damages resulting from any defect or
omission in this document, even if advised of the possibility of such damages.

Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
YEAR 2000 NOTICE
Juniper Networks hardware and software products do not suffer from Year 2000 problems and hence are Year 2000 compliant. The Junos operating system has no known
time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
SOFTWARE LICENSE
The terms and conditions for using Juniper Networks software are described in the software license provided with the software, or to the extent applicable, in an agreement
executed between you and Juniper Networks, or Juniper Networks agent. By using Juniper Networks software, you indicate that you understand and agree to be bound by its
license terms and conditions. Generally speaking, the software license restricts the manner in which you are permitted to use the Juniper Networks software, may contain
prohibitions against certain uses, and may state conditions under which the license is automatically terminated. You should consult the software license for further details.
Contents

Chapter 1: Course Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1

Chapter 2: Overview of Data Center Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1


Initial Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Architectures and Design Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8
Connecting Data Centers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-16
Security and Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-20
Implementation Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-30

Chapter 3: Initial Design Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1


Physical Layout and Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
Environmental Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
Cabling Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16
Data Center Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-22

Chapter 4: Traditional Data Center Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1


Traditional Multitier Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Link Aggregation and Redundant Trunk Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Multichassis Link Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20
Lab: Designing a Multitier Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-35

Chapter 5: Ethernet Fabric Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1


Ethernet Fabric Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Virtual Chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
Virtual Chassis Fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-29
QFabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-60
Junos Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-82
Lab: Ethernet Fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-107

Chapter 6: IP Fabric Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1


The Shift to IP Fabrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
IP Fabric Routing Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-7
IP Fabric Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-23
VXLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-29
Lab: IP Fabric Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-49

Chapter 7: Data Center Interconnect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1


DCI Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Layer 2 DCI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-25
EVPN Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-37
Layer 3 DCI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-48
Lab: Data Center Interconnect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-54

Acronym List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ACR-1

Course Overview

This five-day course is designed to cover best practices, theory, and design principles for data center design including
data center architectures, data center interconnects, security considerations, virtualization, and data center operations.
Objectives
After successfully completing this course, you should be able to:
• State high-level concepts about the different data center architectures.
• Identify features used to interconnect data centers.
• Identify key high-level considerations about securing and monitoring a data center deployment.
• Outline key high-level concepts when implementing different data center approaches.
• Recommend data center cooling designs and considerations.
• Explain device placement and cabling requirements.
• Outline different data center use cases with basic architectures.
• Describe a traditional multitier data center architecture.
• Explain link aggregation and redundant trunk groups.
• Explain multichassis link aggregation.
• Summarize and discuss key concepts and components of a Virtual Chassis.
• Summarize and discuss key concepts and components of a VCF.
• Summarize and discuss key concepts and components of a QFabric System.
• Summarize and discuss key concepts and components of Junos Fusion.
• List the reasons for the shift to IP fabrics.
• Summarize how to scale an IP fabric.
• State the design considerations of a VXLAN overlay.
• Define the term Data Center Interconnect.
• List differences between the different Layer 2 and Layer 3 DCIs.
• Summarize and discuss the benefits and use cases for EVPN.
• Discuss the security requirements and design principles of the data center.
• Identify the security elements of the data center.
• Explain how to simplify security in the data center.
• Discuss the security enforcement layers in the data center.
• Summarize and discuss the purpose of SDN.
• Explain the function of Contrail.
• Summarize and discuss the purpose of NFV.
• Discuss the purpose and function of vSRX and vMX.
• Discuss the importance of understanding the baseline behaviors in your data center.
• List the characteristics of the Junos Space Network Management Platform and describe its deployment
options.
• Describe the importance of analytics.
• Discuss automation in the data center.
• Discuss the benefits of QoS and CoS.
• State the benefits of a converged network.

• Identify general aspects of data center migration.
• Summarize and discuss best practices for migration planning.
• Outline some common migration scenarios.
• Summarize high availability design considerations in the data center.
• Provide an overview of high availability offerings and solutions in the data center.
Intended Audience
This course is targeted specifically for those who have a solid understanding of operation and configuration, and
are looking to enhance their skill sets by learning the principles of design for the data center.

Course Level
JND-DC is an intermediate-level course.
Prerequisites
The prerequisites for this course are as follows:
• Knowledge of routing and switching architectures and protocols.
• Knowledge of Juniper Networks products and solutions.
• Understanding of infrastructure security principles.
• Basic knowledge of hypervisors and load balancers.
• Completion of the Juniper Networks Design Fundamentals (JNDF) course.

Course Agenda

Day 1
Chapter 1: Course Introduction
Chapter 2: Overview of Data Center Design
Chapter 3: Initial Design Considerations
Chapter 4: Traditional Data Center Architecture
Lab: Designing a Multitier Architecture
Day 2
Chapter 5: Ethernet Fabric Architectures
Lab: Ethernet Fabric Architecture
Day 3
Chapter 6: IP Fabric Architecture
Lab: IP Fabric Architecture
Chapter 7: Data Center Interconnect
Lab: Interconnecting Data Centers
Day 4
Chapter 8: Securing the Data Center
Lab: Securing the Data Center
Chapter 9: SDN and Virtualization in the Data Center
Lab: SDN and Virtualization
Chapter 10: Data Center Operation
Lab: Data Center Operations
Day 5
Chapter 11: Traffic Prioritization for Converged Networks
Lab: Prioritizing Data in the Data Center
Chapter 12: Migration Strategies
Lab: Data Center Migration
Chapter 13: High Availability

Document Conventions

CLI and GUI Text


Frequently throughout this course, we refer to text that appears in a command-line interface (CLI) or a graphical user
interface (GUI). To make the language of these documents easier to read, we distinguish GUI and CLI text from standard
text according to the following table.

Style: Franklin Gothic
    Description: Normal text.
    Usage Example: Most of what you read in the Lab Guide and Student Guide.

Style: Courier New
    Description: Console text (screen captures and noncommand-related syntax) and GUI text elements (menu names and text field entries).
    Usage Example: commit complete
                   Exiting configuration mode
                   Select File > Open, and then click Configuration.conf in the Filename text box.

Input Text Versus Output Text


You will also frequently see cases where you must enter input text yourself. Often these instances will be shown in the
context of where you must enter them. We use bold style to distinguish text that is input versus text that is simply
displayed.

Style: Normal CLI and Normal GUI
    Description: No distinguishing variant.
    Usage Example: Physical interface:fxp0, Enabled
                   View configuration history by clicking Configuration > History.

Style: CLI Input and GUI Input
    Description: Text that you must enter.
    Usage Example: lab@San_Jose> show route
                   Select File > Save, and type config.ini in the Filename field.

Defined and Undefined Syntax Variables


Finally, this course distinguishes between regular text and syntax variables, and it also distinguishes between syntax
variables where the value is already assigned (defined variables) and syntax variables where you must assign the value
(undefined variables). Note that these styles can be combined with the input style as well.

Style: CLI Variable and GUI Variable
    Description: Text where the variable value is already assigned.
    Usage Example: policy my-peers
                   Click my-peers in the dialog.

Style: CLI Undefined and GUI Undefined
    Description: Text where the variable's value is at the user's discretion, or where the variable's value as shown in the lab guide might differ from the value the user must input according to the lab topology.
    Usage Example: Type set policy policy-name.
                   ping 10.0.x.y
                   Select File > Save, and type filename in the Filename field.

Additional Information

Education Services Offerings


You can obtain information on the latest Education Services offerings, course dates, and class locations from the World
Wide Web by pointing your Web browser to: http://www.juniper.net/training/education/.
About This Publication
The Juniper Networks Design—Data Center Student Guide is written and maintained by the Juniper Networks Education
Services development team. Please send questions and suggestions for improvement to training@juniper.net.
Technical Publications
You can print technical manuals and release notes directly from the Internet in a variety of formats:
• Go to http://www.juniper.net/techpubs/.
• Locate the specific software or hardware release and title you need, and choose the format in which you
want to view or print the document.
Documentation sets and CDs are available through your local Juniper Networks sales office or account representative.
Juniper Networks Support
For technical support, contact Juniper Networks at http://www.juniper.net/customers/support/, or at 1-888-314-JTAC
(within the United States) or 408-745-2121 (outside the United States).


Chapter 1: Course Introduction



We Will Discuss:
• Objectives and course content information;
• Additional Juniper Networks, Inc. courses; and
• The Juniper Networks Certification Program.


Introductions
The slide asks several questions for you to answer during class introductions.


Course Contents: Part 1


The slide lists the topics we discuss in this course.


Course Contents: Part 2


The slide lists the remainder of the topics we discuss in this course.


Prerequisites
The slide lists the prerequisites for this course.


General Course Administration


The slide documents general aspects of classroom administration.


Training and Study Materials


The slide describes Education Services materials that are available for reference both in the classroom and online.


Additional Resources
The slide provides links to additional resources available to assist you in the installation, configuration, and operation of
Juniper Networks products.


Satisfaction Feedback
Juniper Networks uses an electronic survey system to collect and analyze your comments and feedback. Depending on the
class you are taking, please complete the survey at the end of the class, or be sure to look for an e-mail about two weeks
from class completion that directs you to complete an online survey form. (Be sure to provide us with your current e-mail
address.)
Submitting your feedback entitles you to a certificate of class completion. We thank you in advance for taking the time to
help us improve our educational offerings.


Juniper Networks Education Services Curriculum


Juniper Networks Education Services can help ensure that you have the knowledge and skills to deploy and maintain
cost-effective, high-performance networks for both enterprise and service provider environments. We have expert training
staff with deep technical and industry knowledge, providing you with instructor-led hands-on courses in the classroom and
online, as well as convenient, self-paced eLearning courses. In addition to the courses shown on the slide, Education
Services offers training in automation, E-Series, firewall/VPN, IDP, network design, QFabric, support, and wireless LAN.

Courses
Juniper Networks courses are available in the following formats:
• Classroom-based instructor-led technical courses
• Online instructor-led technical courses
• Hardware installation eLearning courses as well as technical eLearning courses
• Learning bytes: Short, topic-specific, video-based lessons covering Juniper products and technologies
Find the latest Education Services offerings covering a wide range of platforms at 
http://www.juniper.net/training/technical_education/.


Juniper Networks Certification Program


A Juniper Networks certification is the benchmark of skills and competence on Juniper Networks technologies.


Juniper Networks Certification Program Overview


The Juniper Networks Certification Program (JNCP) consists of platform-specific, multitiered tracks that enable participants
to demonstrate competence with Juniper Networks technology through a combination of written proficiency exams and
hands-on configuration and troubleshooting exams. Successful candidates demonstrate a thorough understanding of
Internet and security technologies and Juniper Networks platform configuration and troubleshooting skills.
The JNCP offers the following features:
• Multiple tracks;
• Multiple certification levels;
• Written proficiency exams; and
• Hands-on configuration and troubleshooting exams.
Each JNCP track has one to four certification levels—Associate-level, Specialist-level, Professional-level, and Expert-level. The
Associate-level, Specialist-level, and Professional-level exams are computer-based exams composed of multiple choice
questions administered at Pearson VUE testing centers worldwide.
Expert-level exams are composed of hands-on lab exercises administered at select Juniper Networks testing centers. Please
visit the JNCP website at http://www.juniper.net/certification for detailed exam information, exam pricing, and exam
registration.


Preparing and Studying


The slide lists some options for those interested in preparing for Juniper Networks certification.


Junos Genius
The Junos Genius application takes certification exam preparation to a new level. With Junos Genius you can practice for
your exam with flashcards, simulate a live exam in a timed challenge, and even build a virtual network with device
achievements earned by challenging Juniper instructors. Download the app now and Unlock your Genius today!


Find Us Online
The slide lists some online resources to learn and share information about Juniper Networks.


Any Questions?
If you have any questions or concerns about the class you are attending, we suggest that you voice them now so that your
instructor can best address your needs during class.



Chapter 2: Overview of Data Center Design



We Will Discuss:
• High-level concepts about the different data center architectures;
• Features used to interconnect data centers;
• Key high-level considerations about securing and monitoring a data center deployment; and
• Key high-level concepts for implementing and enhancing performance in a data center.


Initial Considerations

The slide lists the topics we will discuss. We discuss the highlighted topic first.


What Is a Data Center?


As the name implies, a data center is a center, or facility, in which data is collected (or stored) and processed. Data centers
come in many shapes and sizes but do share some common components, as shown on the slide.
The primary ingress and egress point for a data center, along with the equipment used to provide those gateway services, is included in the WAN domain. The components, processes, and policies used to protect the data center and its resources are part of the security domain. The systems and tools used to manage and monitor the data center and its operations are found in the management domain. The compute and storage domain includes the data assets, services, applications, and the compute and storage equipment required by the business and its users. The Layer 2 and Layer 3 infrastructure domain includes the devices, connections, and corresponding policies and protocols used to interconnect all other domains and to facilitate communications between users and their targeted resources.


Understanding the Current Trends


Many enterprise customers are updating their private clouds, moving their services to a public cloud provider, or doing some combination of both. Most customer organizations have someone, or an internal team, looking at cloud services, whether that is a private cloud that they manage or a public cloud where they manage all or some of the network while someone else is responsible for hardware and facilities. It is important to understand these trends and become familiar with the different data center environments so you can interpret the customer's needs and provide them with a complete data center solution.


Applications are Driving Design Change


Data centers must be flexible and change as their users' needs change. This means that today's data centers must evolve, becoming flatter, simpler, and more flexible in order to keep up with constantly increasing end-user demands. Understanding why these changes are being implemented is important when trying to understand the needs of the customer. A few factors are driving this change, including:
• Application Flows: More east-west traffic communication is happening in data centers. With today's applications, a single request can generate a lot of traffic between devices in a single data center. Basically, a single user request triggers a barrage of additional requests to other devices. This "go here, get this; then go there, get that" behavior of many applications occurs on such a large scale today that it is driving data centers to become flatter and to provide higher performance with consistency.
• Network Virtualization: This means overlay networks, for example, NSX and Contrail. Virtualization is being implemented in today's data centers and will continue to gain popularity in the future. Some customers might not currently be using virtualization in their data center, but it can definitely play a role in your design for those customers that are forward looking and eventually want to incorporate some level of virtualization.
• Everything as a service: To be cost effective, a data center offering hosting services must be easy to scale out and scale back as demands change. The data center should be very agile, making it easy to deploy new services quickly.


Initial Design Considerations


There are four main design elements that should be considered with every data center.
1. Building and location of the data center: The location of your data center can be as important as the data
center itself. Many things influence the location for the data center. Power availability and cost are a big
concern and can vary a great deal depending on the geographical area. There can also be environmental
concerns that should be taken into account, such as frequent earthquakes, tornadoes, or flooding.
2. Power: This is a major consideration for many data centers. There must be enough power to operate the entire data center. A data center should also have some type of power conditioning (UPS, etc.) and backup generation if an extended power outage would result in a catastrophic business problem.
3. Heating and cooling: The heating and cooling design is very important. The ideal layout for a data center positions the racks in a hot-aisle/cold-aisle fashion; this method maximizes your cooling potential. There are also a few inexpensive and simple ways to maximize airflow through a data center rack, including installing blank panels and using a cable management system to keep cables from obstructing airflow.
4. Cabling: This might seem like an unimportant or simple piece of a data center design, but cabling can quickly
become a very large expense in a data center. Cable types and lengths can also restrict where you place your
switching devices within the data center racks. Poor cabling design and implementation can result in extended
outages due to difficulty in service and support.


Architectures and Design Considerations


The slide highlights the topic we discuss next.


What Impacts Architectural Choices?


Many things can influence which architecture fits a customer’s data center needs including:
• Data center size: The number of required server ports is one of the major deciding factors for which data center architecture you choose. This can be one of the first factors you use to narrow down the list of available data center architectures.
• Required resources: Is there enough power, cooling capacity, and rack space?
• Networking requirements: Are there networking restrictions or requirements within the data center, such as overlapping VLAN assignments?
• Services being offered: Offering services requires a very flexible architecture that can easily scale up as demand increases. It must be easy to add and remove services for customers using automation.
• Growth potential: A data center design should take into account the customer's desire for growth. Some architectures allow devices to be added easily and require very little setup effort. You should plan for this flexibility when deciding which architecture to use.
• Interoperability: In some cases you might need to update a data center where existing equipment must be incorporated. This could be equipment from another vendor or even legacy Junos equipment.
As you become more familiar with the different architectures discussed, you will be able to quickly identify which architecture you should propose based on the requirements for the data center.


Data Center Fabric Architectures


The graphic on the slide is designed to serve as a quick Juniper Networks data center architecture guide based strictly on the
access (server) ports needed. The scaling numbers provided are calculated based on access switches that have 96 available
server ports.


Multitier using MC-LAG


This combination is the recommended deployment method if the data center requires a standard multitier architecture.
Multichassis link aggregation groups (MC-LAGs) are very useful in a data center when deployed at the access layer to allow redundant connections to your servers while also providing dual control planes. In addition to the access layer, MC-LAGs are also commonly deployed at the core layer. When MC-LAG is deployed in an active/active fashion, both links between the attached device and the MC-LAG peers are active and available for forwarding traffic. Using MC-LAG eliminates the need to run STP on member links and, depending on the design, can eliminate the need for STP altogether.
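
To make the active/active model concrete, the following is a minimal, hypothetical configuration sketch for one MC-LAG peer; the interface names, addresses, VLAN, and identifiers are placeholders, and a complete deployment also needs the interchassis link and matching configuration on the second peer:

    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 aggregated-ether-options lacp system-id 00:01:02:03:04:05
    set interfaces ae0 aggregated-ether-options mc-ae mc-ae-id 1
    set interfaces ae0 aggregated-ether-options mc-ae chassis-id 0
    set interfaces ae0 aggregated-ether-options mc-ae mode active-active
    set interfaces ae0 aggregated-ether-options mc-ae status-control active
    set interfaces ae0 unit 0 family ethernet-switching vlan members v100
    set protocols iccp local-ip-addr 10.10.10.1
    set protocols iccp peer 10.10.10.2 liveness-detection minimum-interval 1000

Both peers share the same LACP system ID so that the attached server sees a single LAG even though its links terminate on two separate switches.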


Virtual Chassis Fabric


The Juniper Networks Virtual Chassis Fabric (VCF) provides a low-latency, high-performance fabric architecture that can be
managed as a single device. VCF is an evolution of the Virtual Chassis feature, which enables you to interconnect multiple
devices into a single logical device, inside of a fabric architecture. A VCF is constructed using a spine-and-leaf architecture. In
the spine-and-leaf architecture, each spine device is interconnected to each leaf device. A VCF supports up to 32 total
devices, including up to four devices being used as the spine.
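
To illustrate how little configuration a preprovisioned VCF requires, the hypothetical sketch below (entered on the master routing engine) lists each member by role; the serial numbers are placeholders, and in this example the spine members carry the routing-engine role while the leaf members are line cards:

    set virtual-chassis preprovisioned
    set virtual-chassis member 0 role routing-engine serial-number TA0000000001
    set virtual-chassis member 1 role routing-engine serial-number TA0000000002
    set virtual-chassis member 2 role line-card serial-number TB0000000003
    set virtual-chassis member 3 role line-card serial-number TB0000000004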


QFabric
The QFabric System is composed of multiple components working together as a single switch to provide high-performance,
any-to-any connectivity and management simplicity in the data center. The QFabric System flattens the entire data center
network to a single tier where all access points are equal, eliminating the effects of network locality and making it the ideal
network foundation for cloud-ready, virtualized data centers. QFabric is a highly scalable system that improves application
performance with low latency and converged services in a non-blocking, lossless architecture that supports Layer 2, Layer 3,
and Fibre Channel over Ethernet (FCoE) capabilities. The reason you can consider the QFabric system as a single system is
that the Director software running on the Director group allows the main QFabric system administrator to access and
configure every device and port in the QFabric system from a single location. Although you configure the system as a single
entity, the fabric contains four major hardware components. The hardware components can be chassis-based, group-based,
or a hybrid of the two.


Junos Fusion
Junos Fusion is a Juniper Networks Ethernet fabric architecture designed to provide a bridge from legacy networks to
software-defined cloud networks. With Junos Fusion, service providers and enterprises can reduce network complexity and
operational costs by collapsing underlying network elements into a single, logical point of management. The Junos Fusion
architecture consists of two major components: aggregation devices and satellite devices. With this structure it can also be
classified as a spine and leaf architecture. These components work together as a single switching system, flattening the
network to a single tier without compromising resiliency. Data center operators can build individual Junos Fusion pods
comprised of a pair of aggregation devices and a set of satellite devices. Each pod is a collection of aggregation and satellite
devices that are managed as a single device. Pods can be small—for example, a pair of aggregation devices and a handful of
satellites—or large with up to 64 satellite devices based on the needs of the data center operator.
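
As a rough, hypothetical illustration of how an aggregation device brings a satellite into the pod, the fragment below assigns the satellite an FPC slot number and marks the connecting port as a cascade port; the slot number, alias, and interface are assumptions for this example and are not a complete Junos Fusion configuration:

    set chassis satellite-management fpc 100 alias rack1-satellite
    set chassis satellite-management fpc 100 cascade-ports xe-0/0/10
    set interfaces xe-0/0/10 cascade-port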


IP Fabric
An IP Fabric is one of the most flexible and scalable data center solutions available. Because an IP Fabric operates strictly at Layer 3, no proprietary features or protocols are used, so this solution works very well in data centers that must accommodate multiple vendors. One of the most complicated tasks in building an IP Fabric is assigning all of the details: IP addresses, BGP AS numbers, routing policy, loopback address assignments, and many other implementation details.
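
For example, one common design gives every leaf its own private AS number and runs EBGP from the leaf to each spine, with multipath and per-flow load balancing so that all spine links are used. The following hypothetical sketch shows what that might look like on a single leaf; the AS numbers, addresses, and policy names are assumptions for illustration:

    set routing-options autonomous-system 65003
    set policy-options policy-statement pfe-load-balance then load-balance per-packet
    set routing-options forwarding-table export pfe-load-balance
    set policy-options policy-statement advertise-lo0 term 1 from interface lo0.0
    set policy-options policy-statement advertise-lo0 term 1 then accept
    set protocols bgp group underlay type external
    set protocols bgp group underlay export advertise-lo0
    set protocols bgp group underlay multipath multiple-as
    set protocols bgp group underlay neighbor 172.16.0.0 peer-as 65001
    set protocols bgp group underlay neighbor 172.16.0.2 peer-as 65002

Despite its name, the load-balance per-packet action results in per-flow load balancing on most platforms, which keeps packets of a single flow in order across the fabric.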


Connecting Data Centers


The slide highlights the topic we discuss next.


What is a Data Center Interconnect?


Data center interconnect (DCI) is basically a method to connect multiple data centers together. As the name implies, a
Layer 3 DCI uses IP routing between data centers while a Layer 2 DCI extends the Layer 2 network (VLANs) from one data
center to another.
Many of the DCI communication options rely on an MPLS network to transport frames between data centers. Although in most cases an MPLS network can be substituted with an IP network (for example, by encapsulating MPLS in GRE), there are several advantages to using an MPLS network, including availability, cost, fast failover, traffic engineering, and scalable VPN options.


Layer 2 Options
Three classifications exist for Layer 2 DCIs:
1. No MAC learning by the Provider Edge (PE) device: This type of Layer 2 DCI does not require that the PE devices learn MAC addresses.
2. Data plane MAC learning by the PE device: This type of DCI requires that the PE device learn the MAC addresses of both the local data center and the remote data centers.
3. Control plane MAC learning by the PE device: This type of DCI requires that a local PE learn the local MAC addresses using the control plane and then distribute these learned MAC addresses to the remote PEs.
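
Control plane MAC learning is the model used by EVPN, where MAC addresses are advertised between PEs as BGP routes. The sketch below is a hypothetical fragment of such a configuration on one PE; the instance name, VLAN, addresses, and route target are placeholders rather than a complete DCI design:

    set protocols bgp group dci-overlay type internal
    set protocols bgp group dci-overlay local-address 192.0.2.1
    set protocols bgp group dci-overlay family evpn signaling
    set protocols bgp group dci-overlay neighbor 192.0.2.2
    set routing-instances DC-VLAN100 instance-type evpn
    set routing-instances DC-VLAN100 vlan-id 100
    set routing-instances DC-VLAN100 interface ge-0/0/1.100
    set routing-instances DC-VLAN100 route-distinguisher 192.0.2.1:100
    set routing-instances DC-VLAN100 vrf-target target:65000:100
    set routing-instances DC-VLAN100 protocols evpn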


Layer 3 Options
A Layer 3 DCI uses routing to interconnect data centers. Each data center must maintain a unique IP address space. A Layer 3 DCI can be established over just about any IP-capable link. Another important consideration for DCIs is incorporating some level of redundancy by using link aggregation groups (LAGs), IGPs with equal-cost multipath, and BGP or MP-BGP with the multipath or multihop features.
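
As a hypothetical illustration of two of those redundancy options on a DCI edge router, the fragment below bundles two links toward the WAN into a LAG and enables BGP multipath so that equal-cost routes learned from two provider sessions are both installed; the interface names, addresses, and AS numbers are assumptions:

    set chassis aggregated-devices ethernet device-count 2
    set interfaces xe-0/0/0 gigether-options 802.3ad ae0
    set interfaces xe-0/0/1 gigether-options 802.3ad ae0
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 unit 0 family inet address 203.0.113.1/31
    set protocols bgp group dci type external
    set protocols bgp group dci multipath
    set protocols bgp group dci neighbor 203.0.113.0 peer-as 64600
    set protocols bgp group dci neighbor 198.51.100.0 peer-as 64600

The second neighbor is assumed to be reached over a separate WAN link (not shown); with multipath enabled, traffic toward prefixes learned from both sessions is load-balanced across the two paths.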


Security and Operation


The slide highlights the topic we discuss next.


Why Is Security Important?


Data center security is an integral part of a design proposal and is extremely important because you are protecting the
physical devices, personal information, and intellectual property of the customer. When dealing with security there are many aspects that must be considered. The physical security of the data center can seem unimportant, but it is your first line of defense. The building and data center should be secured, and access should be limited to a few trustworthy employees. This might not be part of your design proposal, but it should be discussed with the customer. In the data center it is important to ensure all aspects are protected, including securing the network, devices, and stored data, without significantly impacting performance.


Security Deployment Strategies


When deploying security devices there are generally two approaches:
• Inline: Place the SRX Series device in the path of all traffic that is entering and leaving the data center. Typically
this device is between your switches and the WAN devices. This placement ensures traffic entering and leaving
your data center is always protected through the firewall services. One of the advantages of this deployment is
that it reduces the number of interfaces required for firewalls to connect to the adjacent devices. One interface
connects to each of the core-aggregation (spine) devices, while the other interfaces connect to the WAN edge
devices.
• One-arm: A one-arm firewall deployment is another physical approach for firewall deployment in the data center architecture. An SRX Series device sits off to the side and is typically connected to the core-aggregation (spine) devices. This deployment enables the administrator to either inspect all traffic or allow select traffic to bypass firewall inspection.
There are many unique variations that can be implemented using these two approaches. Your actual design and device
placement will depend on what traffic needs to be secured and what direction this traffic is traveling.


Traffic Patterns
Understanding how traffic flows through and within your data center is key to knowing how and where your security devices
need to be placed. For instance, if you know that there will be a large amount of VM to VM or even server to server traffic
(also referred to as east-west traffic) that must be secured, you can consider adding a virtual firewall (vSRX) to inspect this
traffic and apply security policies as needed. If you are strictly concerned with north-south traffic then using a physical
SRX Series device in the network, using one of the previously outlined methods, is ideal.
Another key aspect of your security design is creating logical separation between areas in the data center by defining and
implementing security zones. Then policies can be used to control traffic that is allowed into one zone but not into another.
Basically, you take a trusted versus untrusted approach to traffic flowing through your network. Another method of
segmentation is using virtual routing instances to create logical separation in your network. This approach allows you to
continue to apply security policies while completely separating traffic using the firewall instead of other devices in the
network.
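
The following minimal, hypothetical SRX sketch shows the zone-and-policy segmentation just described; the zone names, interfaces, and address-book entry are placeholders chosen for this example:

    set security zones security-zone dc-untrust interfaces ae0.0
    set security zones security-zone dc-servers interfaces ae1.0
    set security address-book global address web-servers 10.1.10.0/24
    set security policies from-zone dc-untrust to-zone dc-servers policy allow-web match source-address any
    set security policies from-zone dc-untrust to-zone dc-servers policy allow-web match destination-address web-servers
    set security policies from-zone dc-untrust to-zone dc-servers policy allow-web match application junos-https
    set security policies from-zone dc-untrust to-zone dc-servers policy allow-web then permit

Traffic between the two zones that does not match the policy is dropped by the default deny behavior, which is what produces the trusted versus untrusted model described above.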


What Is Security Intelligence?


Protecting a cloud data center from advanced malware and other threats requires a new way of thinking about network defenses, and companies must focus on detecting attacks and attackers early on. Security Intelligence is a security framework that protects web servers against evolving security threats by employing threat detection software, both local and cloud-based security information, and control software with a next-generation firewall system. The Spotlight Secure cloud-based threat intelligence feeds provide a stream of information about evolving threats that is gathered, analyzed, and prioritized by Juniper Networks from multiple collection points, and this threat information is used to identify potential threats to the data center.


SDN and Virtualization


Juniper has a very complete SDN strategy which is described as the 6-4-1 SDN strategy. This 6-4-1 SDN strategy consists of
six general principles (Separate, Centralize, Use the cloud, Common platform, Standardize, and Apply Broadly), four steps
(Centralize management, Extract services, Centralize controller and Optimize the hardware), and one licensing model.
Juniper’s Contrail is a simple, open, and agile SDN solution that automates and orchestrates the creation of highly scalable
virtual networks. These virtual networks let you harness the power of the cloud—for new services, increased business agility,
and revenue growth. Contrail is an extensible system that can be used for multiple networking use cases but there are two
primary drivers in data centers:
• Elastic Cloud Networking—Private clouds for enterprises or service providers, Infrastructure as a Service (IaaS) and Virtual Private Clouds (VPCs) for cloud service providers. This use case relies on Contrail's network programmability to understand and translate abstract commands into specific rules and policies that automate provisioning of workloads, configure network parameters, and enable automatic chaining of services. This approach hides the complexities and low-level details of underlying elements like ports, virtual LANs (VLANs), subnets, and others.
• Network Function Virtualization (NFV)—Provides dynamic service insertion by automatically spinning up and chaining together Juniper and third-party service instances that dynamically scale out with load balancing. This concept reduces service time-to-market, improving business agility and mitigating risk by simplifying operations with a more flexible and agile virtual model.
Virtualization has become beneficial for data centers; it has enabled them to increase profits while keeping costs under control. Many data centers are virtualizing services and applications, which creates challenges in securing and routing the traffic between VMs. Juniper Networks has virtualized its SRX and MX platforms to deliver their capabilities in a virtualized environment.


Data Center Operation


An important part of designing a good data center is considering the ongoing management and operation of its devices and health. In order to efficiently manage your data center, you should define and incorporate network standards, including device access, device naming conventions, rack layouts, and Junos OS versions. This will make the ongoing tasks associated with ensuring the data center is healthy and operating at peak performance much easier.
Another important point to consider is: how do I know what is normal for my data center? This is not an easy question to answer and is different for every data center. It is important to know what normal is in order to determine whether something is abnormal. Once the data center is in production, you can determine the baseline by using tools designed to monitor your network's health, including Junos Space, Juniper Secure Analytics, and automation scripts. You might need to establish a new data center baseline multiple times throughout the life cycle of a data center as new services or devices are added.


Junos Space
Data center operation can be greatly simplified by using Junos Space. Junos Space is a comprehensive network
management solution that simplifies and automates management of Juniper Networks switching, routing, and security
devices. With all of its components working together, Junos Space offers a unified network management and orchestration
solution to help you more efficiently manage the data center.


Juniper Secure Analytics


Juniper Secure Analytics (JSA) is a unique log management, security event management, and network behavior anomaly detection (NBAD) solution combined into one device. JSA can be used to effectively identify and mitigate security threats in a timely and efficient manner.


Automation
Junos automation is part of the standard Junos OS available on all switches, routers, and security devices running Junos OS. Junos automation can be used to automate operational and configuration tasks on a network's Junos devices. The slide highlights both on-box and off-box automation capabilities, including support for multiple scripting languages.
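
As a small on-box example of this capability, an event policy can react to a link-down event by collecting operational state automatically; this is a hypothetical sketch and the policy name and commands are illustrative only:

    set event-options policy capture-link-down events snmp_trap_link_down
    set event-options policy capture-link-down then execute-commands commands "show interfaces extensive"
    set event-options policy capture-link-down then execute-commands commands "show log messages"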


Implementation Considerations
The slide highlights the topic we discuss next.


Implementation Plan
Implementation planning is more than just bringing the new data center into production. It requires a great deal of planning and should be very comprehensive. As part of your implementation planning process, you should also consider the traffic prioritization, FCoE requirements, and high availability needs of the customer.


Common Migration Methodology


Juniper Networks follows a common methodology when migrating a data center, regardless of the size or scope of the
project. The methodology requires that you not only understand the current state of the customer’s network, but that you
also understand what the customer desires when the project is complete. The slide highlights a few of the common tasks associated with each of the four major steps in the data center migration process.


Traffic Prioritization

Each application in the data center can have different requirements. Certain applications do not tolerate the loss of packets or delays in packet delivery. This makes it important to classify this traffic with a higher priority than applications that have mechanisms to accommodate some level of loss or delay. If all traffic were treated the same, you would experience
unnecessary problems with these higher priority applications. By classifying and prioritizing the different applications in your
data center you ensure that traffic is handled in the most efficient manner possible. Most importantly, the same traffic
prioritization rules must be applied throughout the data center to ensure proper handling.
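
The hypothetical fragment below sketches how such a policy might look in Junos class of service, classifying traffic on ingress and then scheduling the resulting forwarding classes on egress; the names, code points, and percentages are assumptions for illustration rather than recommended values:

    set class-of-service classifiers dscp dc-classifier forwarding-class expedited-forwarding loss-priority low code-points ef
    set class-of-service classifiers dscp dc-classifier forwarding-class best-effort loss-priority high code-points be
    set class-of-service interfaces xe-0/0/0 unit 0 classifiers dscp dc-classifier
    set class-of-service schedulers critical-sched transmit-rate percent 40
    set class-of-service schedulers critical-sched priority high
    set class-of-service schedulers bulk-sched transmit-rate percent 60
    set class-of-service schedulers bulk-sched priority low
    set class-of-service scheduler-maps dc-map forwarding-class expedited-forwarding scheduler critical-sched
    set class-of-service scheduler-maps dc-map forwarding-class best-effort scheduler bulk-sched
    set class-of-service interfaces xe-0/0/0 scheduler-map dc-map

Applying the same classifier and scheduler map on every hop is what keeps the handling consistent end to end.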


What is I/O Convergence?


Data center bridging (DCB) is a set of enhancements to the IEEE 802.1 bridge specifications. DCB modifies and extends
Ethernet behavior to support I/O convergence in the data center. I/O convergence includes but is not limited to the transport
of Ethernet LAN traffic and Fibre Channel (FC) storage area network (SAN) traffic on the same physical Ethernet network
infrastructure. A converged architecture saves cost by reducing the number of networks and switches required to support
both types of traffic, reducing the number of interfaces required, reducing cable complexity, and reducing administration
activities. The Juniper Networks QFX Series supports the DCB features required to transport converged Ethernet and FC traffic while providing the class-of-service (CoS) and other characteristics that FC requires for transmitting storage traffic. The supported features include:
• A flow control mechanism called priority-based flow control (PFC) designed to help provide lossless transport.
• A discovery and exchange protocol for conveying configuration and capabilities among neighbors to ensure
consistent configuration across the network, called Data Center Bridging Capability Exchange protocol (DCBX),
which is an extension of Link Layer Discovery Protocol (LLDP).
• A bandwidth management mechanism called enhanced transmission selection (ETS).
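
As a rough sketch of the first feature, the hypothetical fragment below enables PFC for IEEE 802.1p code point 011 (commonly used for FCoE) on a QFX interface; the profile name and interface are placeholders, and a real deployment also needs the matching forwarding-class, ETS, and DCBX settings that are not shown here:

    set class-of-service congestion-notification-profile fcoe-cnp input ieee-802.1 code-point 011 pfc
    set class-of-service interfaces xe-0/0/20 congestion-notification-profile fcoe-cnp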


High Availability

High availability should be a major consideration within the data center. It is important to maintain reachability to all services in a data center regardless of individual failures. Failures will happen in a data center, but you must ensure they do not affect the user experience. The slide illustrates some of the high availability features that should be considered while designing a data center.
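
On platforms with dual Routing Engines, for example, control plane redundancy is typically enabled with a few statements like the hypothetical sketch below; which of these apply depends on the platforms selected in the design:

    set chassis redundancy graceful-switchover
    set routing-options nonstop-routing
    set system commit synchronize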


We Discussed:
• High-level concepts about the different data center architectures;
• Features used to interconnect data centers;
• Key high-level considerations about securing and monitoring a data center deployment; and
• Key high-level concepts for implementing and enhancing performance in a data center.


Review Questions
1.

2.

Answers to Review Questions
1.
The five data center architectures are multitier using MC-LAG, Virtual Chassis Fabric, QFabric, Junos Fusion, and IP Fabric.
2.
There are a few tools that can be used to manage a data center including Junos Space, Juniper Secure Analytics, and automation.


Chapter 3: Initial Design Considerations



We Will Discuss:
• Data center cooling design and considerations;
• Device placement and cabling requirements; and
• Different data center use cases including architectural choices.


Physical Layout and Placement


The slide lists the topics we will discuss. We discuss the highlighted topic first.


Physical Layout
One of the first steps in data center design is planning the physical layout of the data center. Multiple physical divisions exist
within the data center that are usually referred to as segments, zones, cells, or pods. Each segment consists of multiple rows
of racks containing equipment that provides computing resources, data storage, networking, and other services.
Physical considerations for the data center include placement of equipment, cabling requirements and restrictions, and
power and cooling requirements. Once you determine the appropriate physical layout, you can replicate the design across all
segments within the data center or in multiple data centers. Using a modular design approach improves the scalability of the
deployment while reducing complexity and easing data center operations.
The physical layout of networking devices in the data center must balance the need for efficiency in equipment deployment
with restrictions associated with cable lengths and other physical considerations. Pros and cons must be considered
between deployments in which network devices are consolidated in a single rack versus deployments in which devices are
distributed across multiple racks. Adopting an efficient solution at the rack and row levels ensures efficiency of the overall
design because racks and rows are replicated throughout the data center.
This section discusses the following data center layout options:
• Top-of-rack (ToR) or bottom-of-rack (BoR);
• Middle-of-row (MoR); and
• End-of-row (EoR).


Top of Rack and Bottom of Rack


In a ToR and BoR deployment, network devices are deployed in each server rack. A single device (or pair of devices for
redundancy at the device level) provides switching for all of the servers in the same rack. To allow sufficient space for
servers, the general recommendation is that the devices in the rack should be limited to a 1U or 2U form factor.
A ToR or BoR layout places high-performance devices within the server rack in a row of servers in the data center. With
devices in close proximity, cable run lengths are minimized. Cable lengths can be short enough to accommodate 1 Gigabit
Ethernet, 10 Gigabit Ethernet, and future 40 Gigabit Ethernet connections. Potential also exists for significant power savings
for 10 Gigabit Ethernet connections when the cable lengths are short enough to allow the use of copper, which operates at
one-third the power of longer-run fiber cables.
Another possible rack deployment scenario, not mentioned on the slide, is middle of rack. In some situations, cable lengths are very important, and you could deploy the access switch in the middle of the rack. This deployment strategy would require the longest cable in your rack to be only about half the overall height of the rack.
With ToR and BoR layouts, you can easily provide switching redundancy on a per-rack basis. However, each legacy device must be managed individually, which can complicate operations and add expense because multiple discrete 24-port or 48-port devices are required to meet connectivity needs. Both top-of-rack and bottom-of-rack deployments provide the same advantages with respect to cabling and switching redundancy. Cabling run lengths are minimized in this deployment and are simpler than in MoR or EoR configurations. ToR deployments provide more convenient access to the network devices, while BoR deployments can be more efficient from an airflow and power perspective, because cool air from under-floor heating, ventilation, and air conditioning (HVAC) systems reaches the network devices in the rack before continuing to flow upward.
ToR and BoR deployments do have some disadvantages, however. Having many networking devices in a single row
complicates topology and management. Because the devices serve only the servers in a single rack, uplinks are required for
connection between the servers in adjacent racks, and the resulting increase in latency can affect overall performance.
Agility is limited because modest increases in server deployment must be matched by the addition of new network devices.
Finally, because each device manages only a small number of servers, more devices are typically required than would
otherwise be needed to support the server population.


End of Row
If the physical cable layout does not support a ToR or BoR deployment, or if the customer prefers a large chassis-based solution, the other options are an EoR or MoR deployment, where network switches are deployed in a dedicated rack in the row.
In the EoR configuration, which is common in older data centers with established cabling, high-density switches are placed at
the end of a row of servers, providing a consolidated location for the networking equipment to support all of the servers in
the row. EoR configurations can support larger form factor devices than ToR and BoR rack configurations, so you end up with
a single access tier switch to manage an entire row of servers. EoR layouts also require fewer uplinks and simplify the
network topology—inter-rack traffic is switched locally. Because EoR deployments require cabling over longer distances than
ToR and BoR configurations, they are best for deployments that involve 1 Gigabit Ethernet connections and relatively few
servers.
Disadvantages of the EoR layout include longer cable runs which can exceed the length limits for 10 Gigabit Ethernet and 
40 Gigabit Ethernet connections, so careful planning is required to accommodate high-speed network connectivity. Device
port utilization is not always optimal with traditional chassis-based devices, and most chassis-based devices consume a
great deal of power and cooling, even when not fully configured or utilized. In addition, these large chassis-based devices
can take up a great deal of valuable data center rack space.


Middle of Row
A MoR deployment is similar to an EoR deployment, except that the devices are deployed in the middle of the row instead of
at the end. The MoR configuration provides some advantages over an EoR deployment, such as the ability to reduce cable
lengths to support 10 Gigabit Ethernet and 40 Gigabit Ethernet server connections. High-density, large form-factor devices
are supported, fewer uplinks are required in comparison with ToR and BoR deployments, and a simplified network topology
can be adopted.
You can configure an MoR layout so that devices with cabling limitations are installed in the racks that are closest to the
network device rack. While the MoR layout is not as flexible as a ToR or BoR deployment, the MoR layout supports
greater scalability and agility than the EoR deployment.
Although it minimizes the cable length disadvantage associated with EoR deployments, the MoR deployment still has the same port utilization, power, cooling, and rack space concerns as an EoR deployment.


Environmental Conditions
The slide highlights the topic we discuss next.


Hot and Cold Aisle Design


A critical element to minimizing power consumption in the data center is the concept of hot aisles and cold aisles. The idea
is to keep the cool air supplied to the equipment separate from the hot air exhausted from the equipment. Data center
devices are racked so that cool air is drawn into the equipment on a common “cold aisle” where the cool air is delivered. The
other side of the rows creates a common “hot aisle” into which the hot air from the equipment is exhausted. The hot air can
then be drawn into the air conditioning equipment, cooled, and redistributed into the cold aisle.
It is desirable to have as much separation as possible between the cool air supplied to the devices and the hot air exhausted
from the devices. This separation makes the cooling process more efficient and provides more uniformity of air temperature
from the top to the bottom of the racks, preventing “hot spots” within the data center. Physical barriers above and around the racks, as well as the racks themselves, can be used to help achieve the desired separation and airflow.


Enabling Hot Aisle / Cold Aisle Data Center Design


Products are available from several rack manufacturers that provide support for implementing hot aisle/cold aisle designs in
data centers. For example, cabinets are available that take cold air in front of the rack, move it through the chassis with
specially designed baffles, and then expel hot air at the rear of the cabinet.
Cool air is often forced through perforated tiles in raised floors as a way of delivering cool air to the cold aisle. Plenums above
the racks are then used to vent the hot air for re-cooling. More recently, delivering the cold air through ducts and plenums above the rack cabinets and exhausting the hot air through separate ductwork and plenums has been used to take advantage of the natural tendency of cold air to fall and warm air to rise.
Some Juniper devices can be ordered with different airflows, meaning the air flows through the device in different directions. These devices come in two models: air flow in (AFI) and air flow out (AFO). On the AFI models, the air flows back-to-front, which means the cool air enters through the rear of the device and exhausts out the front. In contrast, on the AFO models, the air flows front-to-back, which means the cool air enters through the front of the device and exhausts out the rear.


Power Considerations
As old data center facilities are upgraded and new data centers are built, an important consideration is to ensure that the data center network infrastructure is designed for maximum energy and space efficiency as well as minimal environmental impact. Power, space, and cooling requirements of all network components must be accounted for and compared across different architectures and systems so that the environmental and cost impacts of the entire data center can be taken into consideration—even down to the lighting. Many times, the most efficient approach is to implement high-end, highly scalable systems that can replace a large number of smaller components, thereby delivering energy and space efficiency. Green initiatives that track resource usage, carbon emissions, and efficient utilization of resources such as power and cooling should also be considered when designing a data center.
Fewer devices require less power, which in turn reduces cooling requirements, thus adding up to substantial power savings.


Physical Plant Limitations and Efficiencies: Part 1


The challenge of physical plant limitations is the most fundamental challenge data center managers face. Every data center
has length, width, height, and power supply and cooling capacity limitations within which it must exist. As server and
computing power becomes denser in multicore and blade chassis server configurations, saving space becomes more
beneficial. The more computing power that can be housed in the same amount of rack space, the lower the square-footage
requirement. In every case, designers can achieve some equilibrium to accommodate the overall growth in computing
capacity required by our increasingly automated, interconnected world. With this increased density comes increased power
and cooling requirements. Sometimes these requirements create the need for a redesign or upgrade of existing electrical
and cooling infrastructures to support the increased density.
In today’s world, taking every step possible to minimize power draw for the required functionality becomes a critical goal. As
such, data center operators look at every opportunity to achieve this goal, searching for the most capable equipment with
increased functionality and occupying the minimum in rack space, power, and cooling systems.


Physical Plant Limitations and Efficiencies: Part 2


Physical plant limitations have led to customers using capacity, power, and cooling metrics as an important part of
equipment selection. Using equipment that meets these needs and working with suppliers who are well versed in addressing
these needs is essential. Integrating the designs into an overall data center view of power consumption, air flow, and cooling
becomes an important part of the design objectives. Knowing what the hot and cold aisle assignments are, and designing
equipment placement to meet efficiency objectives, is also essential.


Energy Efficiency
Constraints in the amount of power available to support the data center infrastructure, concurrent with the increase in
demand for applications that organizations are experiencing, make designing data centers with a minimum amount of power
consumed per unit of work performed imperative. Every piece of the data center infrastructure matters in energy
consumption.
Juniper Networks has started building energy efficiency techniques into its entire product line. The Energy
Consumption Rating (ECR) Initiative, formed by multiple organizations in the networking industry to create a common metric
to compare energy efficiency, provides a compelling perspective. This group has defined the energy efficiency ratio (EER) as
a key metric. EER measures the number of gigabits per second of traffic that can be moved through a network device per
kilowatt of electricity consumed—the higher the number, the greater the efficiency. Different designs achieve different
ratings based on this metric.
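To make the EER metric concrete, the following short Python sketch computes gigabits per second forwarded per kilowatt consumed, as defined above. The device names and throughput and power figures are hypothetical, used only to illustrate the arithmetic, and do not represent actual product specifications.

    # Hedged sketch: EER as defined above (Gbps of throughput per kW consumed).
    # The device names and numbers below are illustrative, not data sheet values.

    def eer(throughput_gbps, power_watts):
        """Return the energy efficiency ratio in Gbps per kilowatt."""
        return throughput_gbps / (power_watts / 1000.0)

    devices = {
        "chassis-switch-A": (1280, 4000),   # 1280 Gbps forwarded at 4000 W
        "fixed-switch-B": (640, 350),       # 640 Gbps forwarded at 350 W
    }

    for name, (gbps, watts) in devices.items():
        print(f"{name}: {eer(gbps, watts):.0f} Gbps per kW")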


Cabling Options
The slide highlights the topic we discuss next.


Data Center Cabling


Cable installation is a major cost in data centers due to the labor involved in pulling cables through conduits and cable trays
and, to varying degrees, the price of the cabling itself. Organizations install different types of cabling in different parts of the
data center based on factors such as the equipment being connected, the bandwidth required by a particular device or link,
and the distances between the connected devices. Organizations often try to install sufficient cabling to accommodate
future expansion. However, any major change to a data center—such as upgrading to higher-performance servers or moving
to a higher-speed core—can result in the need to run new cabling.
Cabling runs basically everywhere in the data center, both within tiers and between them. Cabling runs within racks to
connect servers and other appliances to each other and to their networks, between racks and the access switches, between
the switches in the access tier, between access switches and switches in the aggregation or core tiers, between devices in
the core, and between core devices and edge equipment housed in the telecom room.
The table on the slide summarizes data center cabling infrastructure, showing the data center network tiers, the devices
connected within them (for example, WAN routers, storage area network [SAN] devices or network attached storage [NAS]
devices) and the type of cable typically used.


Planning for 40-Gigabit Ethernet and 100-Gigabit Ethernet


Several trends are driving the need for higher bandwidth throughout the data center. For example, dual-port 10 Gigabit
Ethernet network interface cards (NICs) for servers are getting cheaper, so they are being deployed more often. The use of 
10 Gigabit Ethernet in the equipment distribution area is driving the need for 40 Gbps to 100 Gbps links in the access,
aggregation, and core tiers. To date, network equipment vendors have been delivering speeds of 40 Gbps and 100 Gbps by
using variants of a technique called wavelength-division multiplexing (WDM). For example, using WDM, vendors create a link
that is essentially four 10 Gbps signals combined onto one optical medium. Similarly, 100 Gbps links can be composed of
four 25 Gbps or ten 10 Gbps channels.
In 2007, the Institute of Electrical and Electronics Engineers (IEEE) began the process of defining standards for 40 Gigabit
Ethernet and 100 Gigabit Ethernet communications, which were ratified in June 2010. These 40 Gigabit Ethernet and 
100 Gigabit Ethernet standards encompass a number of different physical layer specifications for operation over single
mode fiber (SMF), OM3 multimode fiber (MMF), copper cable assembly, and equipment backplanes (see the table on the
slide for more details). To achieve these high Ethernet speeds, the IEEE has specified the use of ribbon cable, which means
that organizations need to pull new cable in some or all parts of the data center, depending on where 40 Gigabit Ethernet or 
100 Gigabit Ethernet is needed.


Cabling Options Compared


The table on the slide gives you an overview of data center fiber cabling options. Coarse wavelength division multiplexing
(CWDM) and parallel optics SMF and MMF transceiver types are compared, showing the associated costs and distance
limitations.


Transceiver and Cable Type


The illustration on the slide shows the 10 Gigabit Ethernet, 40 Gigabit Ethernet, and 100 Gigabit Ethernet transceiver and
fiber cable types that are used in the data center. With advances in technology the data center is moving beyond 10 Gigabit
Ethernet using MMF or SMF with small form-factor pluggable plus (SFP+) or 10 Gigabit small form-factor pluggable (XFP)
transceivers. The advent of 40 Gigabit Ethernet and 100 Gigabit Ethernet speeds has introduced new connectivity options.
Mechanical transfer pull-off (MTP) is a special type of fiber optic connector made by a company named US Conec. MTP is an
improvement of the original multifiber push-on (MPO) connector designed by a company named NTT. The MTP connector is
designed to terminate several fiber strands—up to 24 strands—in a single ferrule. MTP connections are held in place by a
push-on pull-off fastener, and can also be identified by a pair of metal guide pins that project from the front of the connector.
Multimode transceiver types for 40 and 100 Gigabit Ethernet MTP connections include quad small form-factor pluggable
plus (QSFP+) and CXP. CXP was designed for data centers where high-density 100 Gigabit connections will be needed in the
future (the C stands for the Roman numeral for 100). 40 Gigabit Ethernet and 100 Gigabit Ethernet single-mode fiber connections use the corresponding pluggable transceivers with LC connector fiber pairs.


Future-Proofing Data Center Cabling


Cabling is a high-cost item because installation is labor intensive. To maximize the cabling dollar, your customer should try to
future-proof the cable plant, which is possible because 40 Gigabit and 100 Gigabit use the same cabling guidelines. The
only difference is that 100 Gigabit requires twice as many fibers. Specify a minimum of OM3 fiber and OM4 if extra reach is
needed.
A data center should be designed with a maximum cabling distance of 100 to 150 meters between switches. The 100 to 150
meter length limit is part of the 40 Gigabit and 100 Gigabit specification for multimode fiber. The longest length supported,
150 meters, assumes the customer uses OM4 cabling and no more than two patch panels are in the path. If more than two
patch panels are present, then OM4 is limited to 125 meters. If using OM3 fiber, use 100 meters as the maximum length.
It is recommended that your customers run a minimum of two 12-strand fiber cables. Running two 12-strand fiber cables will put them in a position to move to 100 Gigabit, which uses 24 fiber strands. Running two 12-strand fiber cables provides two 40 Gigabit connections today, or one 100 Gigabit connection in the future. If your customers plan to run two 100 Gigabit connections in the future, then have them run four 12-strand cables today. MTP (or MPO) connectors will become the standard transceiver interface, displacing LC connectors.
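The reach and strand-count guidelines above can be reduced to simple arithmetic. The Python sketch below assumes the figures stated in this section (OM3 limited to 100 meters; OM4 limited to 150 meters with no more than two patch panels in the path, otherwise 125 meters; one 12-strand cable per 40 Gigabit connection; 24 strands per 100 Gigabit connection). Treat it as an illustration of the planning rules, not as a substitute for the cabling specifications.

    # Hedged planning sketch based on the guidelines in this section:
    #   - multimode reach: OM3 up to 100 m; OM4 up to 150 m with <= 2 patch
    #     panels in the path, otherwise 125 m
    #   - strand budget: one 12-strand cable per 40GbE link, 24 strands
    #     (two 12-strand cables) per 100GbE link

    def max_reach_m(fiber_type, patch_panels):
        if fiber_type == "OM3":
            return 100
        if fiber_type == "OM4":
            return 150 if patch_panels <= 2 else 125
        raise ValueError("expected OM3 or OM4")

    def strands_needed(links_40g=0, links_100g=0):
        return links_40g * 12 + links_100g * 24

    print(max_reach_m("OM4", patch_panels=2))   # 150
    print(max_reach_m("OM4", patch_panels=3))   # 125
    print(strands_needed(links_40g=2))          # 24 -> two 12-strand cables
    print(strands_needed(links_100g=2))         # 48 -> four 12-strand cables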
A patch panel should be placed at the Main Distribution Area (MDA), with additional patch panels in each row. Structured cabling should be used between the patch panels, and we recommend running large fiber bundles between patch panels for cost savings.
Obtaining all cabling components from the same manufacturer is important and a life-time or multiyear guarantee should
come with the installation. Using the same manufacturer and having a guarantee in place is important because fiber plants
have always run the risk of having polarity issues and with the increased speeds of 40-Gigabit Ethernet and 100-Gigabit
Ethernet, polarity issues become more pronounced and troublesome. Manufacturers specifically design their components to
work together to avoid these issues.


Data Center Use Cases


The slide highlights the topic we discuss next.


Data Center Use Cases: Part 1


In the typical Software as a Service (SaaS) environment (large or medium scale), applications are purpose-built for the cloud, meaning that Layer 2 constraints, bandwidth intolerances, multitenancy, and other general network functions are handled mostly by the application itself. These applications are generally owned by the SaaS provider and as such are built to be flexible and scalable without the need for much network intervention. Companies like Salesforce, Dropbox, and Zynga are examples of companies that run SaaS data centers. The typical design is a Layer 3 leaf/spine fabric that allows the network to scale out while the “cloud-enabled” applications take care of the typical network nuances. Another example of a typical application is Gmail, which is hosted not on 1,000 servers but on tens of thousands of servers and must scale; therefore, you must look at a Layer 3 design simply because Layer 2 cannot handle the scaling requirements.
In an Infrastructure as a Service (IaaS) data center, the infrastructure is provided to customers. The key with IaaS is really where multitenancy happens. If it happens on the server, you are dealing with virtual networks, and virtual networks can scale to a very large size (tens of thousands or hundreds of thousands of hosts), so you need an SDN controller to manage that type of environment. Because you are using virtual networking, the IaaS infrastructure behaves more like a transport network even though you have a massive multitenant environment, because the infrastructure is mostly unaware of the multitenancy. If the multitenancy happens at the Physical Layer or Network Layer, meaning you do not have virtual networks, or if you are not terminating the customer network inside a host, then you are providing the segmentation in the physical network, which will be visible to the network. In either case, whether you use an SDN controller or more traditional methods of providing multitenancy, the requirements placed on the network will differ slightly.


Data Center Use Cases: Part 2


In the Bare Metal as a Service (BMaaS) environment, customers prefer a single-tenant environment, at least at the server level. The network infrastructure is multitenant, but at the server level it is single-tenant. BMaaS also gives customers the flexibility to choose their type of virtualization (KVM, ESX), a level of flexibility that they would not otherwise have, and it allows hosting providers to give their customers more control by letting them determine what goes on a specific compute node.
In a Telco for network function virtualization (NFV) environment, you would definitely require an SDN controller and also a service orchestrator. Orchestrators such as Maestro or Heat (OpenStack) can spin individual services up on demand, as opposed to entire VMs on demand (from an overlay perspective). Incorporating an open, standards-based controller and technologies avoids a single-vendor lock-in situation. Contrail’s robust L3VPN overlay architecture provides seamless integration with the service provider’s existing L3VPN network infrastructure.


Data Center Use Cases: Part 3


The goal of a private cloud is to build efficient, multitenant, service-oriented data centers that are flexible and can be provisioned on demand. In private cloud data centers, the services provided depend on the requirements. It could be Anything as a Service (XaaS), but it resides on the customer’s premises for compliance and security reasons. Everything else is the same, only on a smaller scale.
A traditional enterprise IT data center is not on demand; it is more of an operational model. With traditional IT networks you can have multitenancy, but you do not have the self-service portals or on-demand services. Enterprise IT follows a more traditional IT model in that respect: you do not have an SDN controller, so you provide some level of simplicity in another way by using Junos Fusion or VCF as the infrastructure.


Large Scale SaaS Data Centers


Large scale SaaS data centers are built to handle massive scale. The typical deployment is thousands of racks using a 5-stage multistage Clos topology. Performance is critical, and the majority of the servers are bare metal. The customer-facing applications are developed in-house and use a scale-out architecture that makes them extremely resilient to failure. Applications are generally owned and controlled by the provider, which allows the provider to predict and dictate the behavior in the data center.
The edge devices in the sample topology are connected to leaf devices. This is strictly a design choice; they could just as easily be connected to the spine devices. If you have a lot of spine devices, you might choose to connect your edge devices to the leaf tier because each spine needs a connection to the edge devices, and the number of required connections can quickly become unmanageable. Many of the use cases represented in this section show the edge devices connected through the leaf tier, but the edge devices could easily be attached to the spine.


Mid Scale SaaS Data Centers


A mid scale SaaS data center is smaller in scale compared to the previously discussed large scale data center (generally hundreds of racks versus thousands of racks), but shares many of the same requirements. Infrastructure is built using a scale-out IP architecture (generally a 3-stage Clos), and the end-user value is placed in the application delivery. The provider might not own all application stacks and therefore might need an overlay for Layer 2, as opposed to large scale data centers, where the provider owns the entire software stack. OpenStack can be used to provide orchestration, and Ansible, Puppet, and Chef can be used for automation. Chef typically applies changes on a periodic 30-minute run, so some customers prefer Ansible (which executes tasks directly), although both work well.


Large Scale IaaS Data Centers


Large scale IaaS companies generally have thousands of racks. These companies use proprietary implementations and tools. The architecture should be a scale-out IP or MPLS fabric, generally with an overlay network on top, such as VXLAN, VRFs, VRF-Lite, or L3VPNs, that is invisible to the network substrate. Network orchestration is generally accomplished using custom software.


BMaaS and Hosting Data Centers


Tenants of a BMaaS data center want dedicated servers because they do not want the possibility of another tenant’s applications monopolizing server resources. This means that there is no multitenancy on the servers, but there is multitenancy in the data center infrastructure; this multitenancy is invisible to the tenant. Tenants generally manage their own servers and can be provided server access through the use of KVM or ESX. Bare metal cloud servers can be delivered using a cloud-like service model but are not virtualized and do not run a hypervisor. Bare metal cloud eliminates the overhead associated with virtualization but does not compromise flexibility, scalability, and efficiency.


Telco Cloud for NFV Data Centers


The Telco cloud for NFV use case brings a new level of service agility to carrier service providers. An SDN controller is critical for this deployment. The extensive traffic-steering and service-chaining capabilities provide flexible multitenant network services offerings in the Contrail-powered cloud at each data center. Contrail’s robust L3VPN overlay architecture provides seamless integration with the service provider’s existing L3VPN network infrastructure. The underlay architecture looks very similar to what we’ve already discussed and can be either IP or MPLS.


Private Cloud Data Centers


Private cloud data centers are a shared multitenant infrastructure that brings network virtualization and group-based policies to enterprise networks. The objective is to build efficient, multitenant, service-oriented data centers that are elastic and can be provisioned on demand. Cloud orchestration is generally handled in private clouds using VMware, OpenStack, or an Azure cloud system. Because private cloud data centers reside on premises, they are managed by the customer and not a service provider.


Enterprise IT
Enterprise customers represent a large portion of the Fortune 500 companies. This business model operates on project lifecycles that require a wide variety of technology to be delivered quickly. While these customers look to transition to private clouds, many legacy workloads and applications exist that require a simple network infrastructure with familiar operations and a single point of management.


Summary of Use Case Architectures


Depending on the data center requirements, there is an architecture that will fit the bill. Each architecture has been detailed, but in many cases a blend of the architectures is necessary for specific customer environments. As illustrated on the slide, many of the physical architectures are the same, but the services provided and where the boundaries between provider and customer are established are where the real differences exist.


We Discussed:
• Data center cooling design and considerations;
• Device placement and cabling requirements; and
• Different data center use cases including architectural choices.


Review Questions
1.

2.

3.

Answers to Review Questions
1.
The primary benefit is that MMF transceivers are cheaper than SMF transceivers.
2.
An end-of-row deployment allows you to incorporate a single access tier for an entire row of servers. This model requires fewer uplinks
and simplifies the network topology.
3.
An Ethernet fabric architecture is recommended for most enterprise IT data centers (VCF, QFabric, Junos Fusion).


Chapter 4: Traditional Data Center Architecture



We Will Discuss:
• Traditional multitier data center architectures;
• Using link aggregation and redundant trunk groups; and
• Using multichassis link aggregation.


Traditional Multitier Architecture


The slide lists the topics we will discuss. We discuss the highlighted topic first.


Multiple Tiers
Legacy data centers are often hierarchical and consist of multiple layers. The diagram on the slide illustrates the typical
layers, which include access, distribution (sometimes referred to as aggregation), and core. Each of these layers performs
unique responsibilities. We cover the functions of each layer on a subsequent slide in this section.
Hierarchical networks are designed in a modular fashion. This inherent modularity facilitates change and makes this design
option quite scalable. When working with a hierarchical network, the individual elements can be replicated as the network
grows. The cost and complexity of network changes is generally confined to a specific portion (or layer) of the network rather
than to the entire network.
Because functions are mapped to individual layers, faults relating to a specific function can be isolated to that function’s
corresponding layer. The ability to isolate faults to a specific layer can greatly simplify troubleshooting efforts.


Functions of Layers
When designing a hierarchical data center network, individual layers are defined and represent specific functions found
within a network. It is often mistakenly thought that the access, distribution (or aggregation), and core layers must exist in
clear and distinct physical devices, but this is not a requirement, nor does it make sense in some cases. The layers are
defined to aid successful network design and to represent functionality that exists in many networks.
The slide highlights the access, aggregation, and core layers and provides a brief description of the functions commonly
implemented in those layers. If CoS is used in a network, it should be incorporated consistently in all three layers.


Benefits of Using Hierarchy


Data centers built utilizing a hierarchical design can bring some flexibility to designers:
• Since using a hierarchical design does not require the use of proprietary features or protocols, a multitier
topology can be constructed using equipment from multiple vendors.
• A multitier implementation allows flexible placement of a variety of switching platforms. The simplicity of the
protocols used does not require specific Junos versions or platform positioning.


Challenges of Using Hierarchy


Data centers built more than a few years ago face one or more of the following challenges:
• The legacy multitier switching architecture cannot provide today’s applications and users with predictable
latency and uniform bandwidth. This problem is made worse when virtualization is introduced, where the
performance of virtual machines (VMs) depends on the physical location of the servers hosting those VMs.
• The management of an ever-growing data center is becoming more and more taxing from an administrative perspective. While the north-to-south boundaries have been fixed for years, the east-to-west boundaries have not stopped growing. This growth of compute, storage, and infrastructure requires a new management approach.
• The power consumed by networking gear represents a significant proportion of the overall power consumed in
the data center. This challenge is particularly important today, when escalating energy costs are putting
additional pressure on budgets.
• The increasing performance and densities of modern CPUs has led to an increase in network traffic. The
network is often not equipped to deal with the large bandwidth demands and increased number of media
access control (MAC) addresses and IP addresses on each network port.
• Separate networks for Ethernet data and storage traffic must be maintained, adding to the training and management budget. Siloed Layer 2 domains increase the overall costs of the data center environment. In addition, outages related to the legacy behavior of the Spanning Tree Protocol (STP), which is used to support these legacy environments, often result in lost revenue and unhappy customers.
Given these challenges, along with others, data center operators are seeking solutions. This, interestingly enough, is where you and your soon-to-be-refined design skills come into play!


Consolidating Tiers
Clearly, more devices within a network architecture means more points of failure. Furthermore, a larger number of devices
introduces a higher network latency, which many applications today simply cannot tolerate. Most legacy chassis switches
add latencies of the order of 20 to 50 microseconds.
Today, we have extremely high-density 10GbE Layer 2 and Layer 3 switches in the data center core. Many of these switches
have 100 Gbps or more of capacity per slot and over 100 line-rate 10GbE ports in a chassis. This enhanced performance
capability in data center switches has resulted in a trend of simplification throughout the entire data center. With this design
simplification, made possible by higher performing and more capable switches, you can eliminate the distribution tier of a
legacy three-tier data center network altogether.
Using a simplified architecture reduces latency through a combination of collapsed switching tiers, Virtual Chassis for direct
path from server to server, and advanced application-specific integrated circuit (ASIC) technologies. By simplifying the data
center architecture, you reduce the overall number of devices, which means the physical interconnect architecture is
simplified. By simplifying the overall design and structure at the physical layer, your deployment, management, and future
troubleshooting efforts are made easier. We examine these points in greater detail on subsequent slides.


Resource Utilization
In the multitier topology displayed on the slide, you can see that almost half the links are not utilized. In this example, you would also need to be running some type of spanning tree protocol (STP) to avoid loops, which would delay network convergence and introduce significant STP control traffic that takes up valuable bandwidth.
This topology is relatively simple but allows us to visualize the lack of resource utilization. Imagine a data center with a
hundred racks of servers with a hundred top of rack access switches. The access switches all aggregate up to the 
core/distribution switches including redundant connections. In this much larger and complicated network you would have
1000s of physical cable connections that are not being utilized. Now imagine these connections are fiber, in addition to the
unused cables you would also have two transceivers per connection that are not being used. Because of the inefficient use
of physical components there is a significant amount of usable bandwidth that is sitting idle. Later in this chapter we will look
at different technologies that can be used to alleviate some of these challenges including redundant trunk groups (RTGs)
and multichassis link aggregation (MC-LAG).


Loop Prevention Using STP: Part 1


STP is the fundamental Layer 2 protocol that is designed to prevent loops. The idea of a spanning tree is to prevent a
Physical Layer loop by blocking a port on the switch. To determine which port should be blocked, STP uses the STP algorithm.
The algorithm selects a port to be blocked based on the selection of the root bridge for the particular Layer 2 spanning-tree
domain. STP has come a long way, and it is a very stable protocol. Many customers are afraid of using it because of their
unpleasant past experiences with the protocol—time delays, slow convergence, and so forth.
Spanning-tree protocols have many varieties—from the older, conventional 802.1d STP to the newer varieties: IEEE 802.1w
RSTP, IEEE 802.1s MSTP, and VSTP.


Loop Prevention Using STP: Part 2


All switches participating in a switched network exchange bridge protocol data units (BPDUs) with each other. Through the
exchanged BPDUs, neighboring switches become familiar with each other and learn the information needed to select a root
bridge. All switches participating on a common network segment must determine which switch offers the least-cost path
from the network segment to the root bridge. The switch with the best path becomes the designated bridge for the LAN
segment, and the port connecting this switch to the network segment becomes the designated port for the LAN segment. If
equal-cost paths to the root bridge exist from a given LAN segment, the bridge ID is used as a tiebreaker. If the bridge ID is
used to help determine the designated bridge, the lowest bridge ID is selected. The designated port transmits BPDUs on the
segment.
STP uses BPDU packets to exchange information between switches. Two types of BPDUs exist: configuration BPDUs and
topology change notification (TCN) BPDUs. Configuration BPDUs determine the tree topology of a LAN. STP uses the
information provided by the BPDUs to elect a root bridge, identify root ports for each switch, identify designated ports for
each physical LAN segment, and prune specific redundant links to create a loop-free tree topology. TCN BPDUs are used to
report and acknowledge topology changes within a switched network.
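The election logic described above boils down to a series of ordered comparisons. The following simplified Python sketch illustrates how the root bridge is chosen (lowest bridge ID, with priority compared before the MAC address) and how a designated bridge is chosen for a shared segment (lowest path cost to the root, with the bridge ID as the tiebreaker). The switch names, priorities, and MAC addresses are hypothetical, and the sketch is a teaching abstraction rather than an implementation of the 802.1D state machines.

    # Simplified illustration of STP election rules (not a protocol implementation).
    # Bridge ID = (priority, MAC address); a lower value compares as better.

    bridges = {
        "SwitchA": {"bridge_id": (32768, "00:11:22:33:44:01")},
        "SwitchB": {"bridge_id": (32768, "00:11:22:33:44:02")},
        "SwitchC": {"bridge_id": (4096,  "00:11:22:33:44:03")},
    }

    # Root bridge: lowest bridge ID (priority first, MAC breaks the tie).
    root = min(bridges, key=lambda name: bridges[name]["bridge_id"])
    print("Root bridge:", root)   # SwitchC wins on its lower priority of 4096

    # Designated bridge on a shared segment: lowest root path cost,
    # then lowest bridge ID as the tiebreaker.
    segment_candidates = {
        "SwitchA": {"root_path_cost": 20000, "bridge_id": (32768, "00:11:22:33:44:01")},
        "SwitchB": {"root_path_cost": 20000, "bridge_id": (32768, "00:11:22:33:44:02")},
    }
    designated = min(
        segment_candidates,
        key=lambda n: (segment_candidates[n]["root_path_cost"],
                       segment_candidates[n]["bridge_id"]),
    )
    print("Designated bridge for segment:", designated)  # SwitchA (lower MAC)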


Loop Prevention Using STP: Part 3


Once the root bridge is elected, all nonroot devices perform a least-cost path calculation to the root bridge. The results of
these calculations determine the role of the switch ports. The role of the individual switch ports determines the port state.
All switch ports belonging to the root bridge assume the designated port role and forwarding state. Each nonroot switch
determines a root port, which is the port closest to the root bridge, based on its least-cost path calculation to the root bridge.
Each interface has an associated cost that is based on the configured speed. If a switch has two equal-cost paths to the root
bridge, the switch port with the lower port number is selected as the root port. The root port for each nonroot switch is placed
in the forwarding state.
STP selects a designated bridge on each LAN segment. This selection process is also based on the least-cost path
calculation from each switch to the root bridge. Once the designated bridge is selected, its port, which connects to the LAN
segment, is chosen as the designated port. If the designated bridge has multiple ports connected to the LAN segment, the
lower-numbered port participating on that LAN segment is selected as the designated port. All designated ports assume the
forwarding state. All ports not selected as a root port or as a designated port assume the blocking state. While in blocked
state the ports do not send any BPDUs. However, they listen for BPDUs.

Fully Converged
Once the role and state for all switch ports is determined, the tree is considered fully converged. The convergence delay can
take up to 50 seconds when the default forwarding delay (15 seconds) and max age timer (20 seconds) values are in effect.
The formula used to calculate the convergence delay for STP is 2 x the forwarding delay + the maximum age.
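As a quick check of that formula, the small calculation below plugs in the default timer values cited above; the function name is illustrative only.

    # Convergence delay for classic STP: 2 x forwarding delay + max age.
    def stp_convergence_delay(forward_delay=15, max_age=20):
        return 2 * forward_delay + max_age

    print(stp_convergence_delay())  # 50 seconds with the default timers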


Spanning Tree Protocol Varieties


The slide summarizes different varieties of spanning-tree protocols, identifying their advantages and disadvantages.
Obviously the convergence of the conventional 802.1d STP was unacceptable. It ranged from 35–50 seconds and in any
modern-day network that delay is simply unacceptable. As a result, the industry moved to RSTP and a more scalable MSTP,
both of which can work together and provide a faster convergence network. MSTP combines different VLANs sharing the
same topology into the same spanning tree instance, resulting in better scalability.


Link Aggregation and Redundant Trunk Groups


The slide highlights the topic we discuss next.


What Is LAG?
Link aggregation (LAG) is a combination of multiple single physical links into one logical link.

Why Is LAG Important?


The advantage of link aggregation is twofold: it provides more aggregate bandwidth than any individual member link, and it improves reliability because the logical link consists of multiple physical links. If a Physical Layer failure of one of the link aggregation members occurs, the other members of the link will continue to forward traffic, although at a reduced bandwidth. For example, suppose you have three 1GbE interfaces combined into a 3GbE AE bundle. If one of those 1GbE interfaces fails, the interface (and its bandwidth) is removed from the bundle, but the remaining two 1GbE interfaces continue to forward traffic as a 2GbE AE bundle. Most networking vendors support link aggregation technology (also referred to as bundling)—especially for connections between highly critical servers and network entities.


What Is LACP?
The protocol that handles link aggregation is Link Aggregation Control Protocol (LACP). LACP is a standards-based Institute of
Electrical and Electronics Engineers (IEEE) 802.3ad protocol.

Why Is LACP Important?


LACP negotiates the bundling parameters between the two nodes. Some of those parameters include physical-link and
duplex settings. Once the negotiations are successful, the bundle starts forwarding traffic. The LAG results in load sharing,
not in load balancing. The load sharing is based on a predefined platform-dependent hashing mechanism. The hash could be based on the source and destination IP addresses, media access control (MAC) addresses, or Layer 4 ports. The goal of the hashing mechanism is to spread the load as evenly as possible across the member links.
If a single server or very few servers monopolize the entire bandwidth on the links, that particular stream is likely to be concentrated on just one link, because the traffic monopolizing the bandwidth has the same source and destination parameters and the same Layer 4 and Layer 2 parameters. In such a situation, you could adjust the hashing parameters. Even if such an adjustment does not help, the bundle still provides reliability: if one member link fails, the remaining links take over within milliseconds and continue to forward traffic.
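The hashing behavior just described can be pictured with a small sketch: header fields from each flow are hashed to select one member link, so a single large flow stays on one link while many distinct flows spread statistically across the bundle. The member link names, hash inputs, and hash function below are illustrative assumptions; actual platforms use their own hardware hashing algorithms and field selections.

    # Illustrative flow-to-member-link hashing for a LAG (platform hashes differ).
    import zlib

    members = ["ge-0/0/0", "ge-0/0/1", "ge-0/0/2"]   # hypothetical member links

    def pick_member(src_ip, dst_ip, src_port, dst_port):
        key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
        return members[zlib.crc32(key) % len(members)]

    # A single heavy flow always hashes to the same member link...
    print(pick_member("10.1.1.10", "10.2.2.20", 33000, 80))

    # ...while many distinct flows spread across the bundle.
    flows = [("10.1.1.10", "10.2.2.20", 33000 + i, 80) for i in range(6)]
    for flow in flows:
        print(flow[2], "->", pick_member(*flow))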
Note that the maximum number of LAG bundles a device is capable of varies between products—check the platform-specific
documentation for any caveats, scaling numbers, and feature interoperability.


What Is RTG?
Redundant trunk group (RTG) is a mechanism that allows a sub-second failover on access switches that are dual-homed to
the upper layer switches.

Why Is RTG Important?


The slide shows a switch that has a primary uplink and a secondary uplink. Assume that the primary link fails. The failure
causes the data link on the device to go down. If RTG is in use, the secondary link comes up immediately (within a few
milliseconds) resulting in almost immediate traffic forwarding. Typically, RTG is used only at the access tier. It is a feature, not
a protocol. Therefore, only the local switch, where RTG is configured, is aware of the RTG configuration.


RTG Example
The slide illustrates a typical topology example in which the RTG feature might be used. In this example, Switch C is
functioning as an access tier switch and has two multihomed trunks connecting to Switch A and Switch B, which are
operating as core-aggregation tier switches. Switch C has an RTG configured and has the trunk connecting to Switch A set as
active, whereas the trunk connecting to Switch B is set as nonactive (or secondary). Because the RTG is configured, Switch C
cannot run STP or RSTP on the network ports that are participating in the RTG. All other ports can, if needed, participate in
STP or RSTP.


RTG Considerations
The following are RTG design considerations:
• RTG and STP are mutually exclusive. The two uplinks—the primary active and the secondary standby uplinks
configured for RTG—do not run STP. However, we recommend that you still enable STP on the rest of the ports.
• From an architectural perspective, access tier switches typically connect to the upper layer—aggregation or core
switches. If, however, access switches interconnect to each other and RTG is used, then this design will cause a
loop because both access switches have RTG ports that are not running STP. Currently, Juniper Networks EX
Series and QFX Series platforms support RTG on the physical and aggregated Ethernet interfaces.


Multichassis Link Aggregation


The slide highlights the topic we discuss next.


Data Center Connectivity


Many of today’s data centers carry mission critical traffic associated with mission critical business applications. Because of
the critical nature of the role of the data center in supporting businesses, every step is taken to ensure the supporting
compute and network elements are as resilient and fault-tolerant as possible. To help sustain a functioning data center
environment, design architects ensure redundant connections and paths exist and that as many single points of failure as possible are eliminated from the environment.
To ensure a single infrastructure switch does not significantly disrupt operations, redundant paths, often with redundant connections, are placed throughout the various tiers in the Layer 2 network. To increase availability and ensure business continuity when a single link connected to compute resources fails, multiple links are often bundled together in a link aggregation group (LAG) using 802.3ad.


A Potential Problem
While operational continuity is a top priority, it is not guaranteed simply by adding multiple, bundled connections between
the compute resources (servers) and their attached access switch. This design, while improved over a design with a single
link, still includes potential single points of failure including the access switch and the compute device.
While the survivability of compute resources can be handled through the duplication of the impacted resources on some
other physical device in the network, typically done through virtualization technologies, the access switch, in this deployment
model, remains a single point of failure and prohibits the utilization of the attached resources.


A Solution
To eliminate the access switch as being a single point of failure in the data center environment, you can use multichassis
link aggregation. Multichassis link aggregation builds on the standard LAG concept defined in 802.3ad and allows a LAG
from one device, in our example a server, to be spread between two upstream devices, in our example two access switches
to which the server connects. Using multichassis link aggregation avoids the single point of failure scenario related to the
access switches described previously and allows operational continuity for traffic and services, even when one of the two
switches supporting the server fails.


Common Positioning Scenarios


As previously mentioned, multichassis link aggregation groups (MC-LAGs) are very useful in a data center when deployed at
the access layer, which connects to the compute resources. These deployment scenarios at the access layer are performed
on select EX Series switches and the QFX Series switches, which are designed for and positioned in the data centers as
top-of-rack (ToR) switches. In addition to the access layer, MC-LAGs are also commonly deployed at the core layer, which is
commonly supported by the EX9200 Series switches or the MX Series routers.
While the illustrated scenarios are the most common deployments of MC-LAG in the data center, MC-LAGs have also been
used at the distribution layer and in some cases in both the north and south directions. Our focus throughout this chapter is
at the access layer.


MC-LAG Overview
An MC-LAG allows two similarly configured devices, known as MC-LAG peers, to emulate a logical LAG interface which
connects to a separate device at the remote end of the LAG. The remote LAG endpoint may be a server, as shown in the
example on the slide, or a switch or router depending on the deployment scenario. The two MC-LAG peers appear to the
remote endpoint connecting to the LAG as a single device.
As previously mentioned, MC-LAGs build on the standard LAG concept defined in 802.3ad and provide node-level
redundancy as well as multihoming support for mission critical deployments. Using MC-LAGs avoids the single point of failure
scenario related to the access switches described previously and allows for operational continuity for traffic and services,
even when one of the two MC-LAG peers supporting the server fails.
MC-LAGs make use of the Interchassis Control Protocol (ICCP), which is used to exchange control information between the
participating MC-LAG peers. We discuss ICCP further on the next slides.


Interchassis Control Protocol Overview


ICCP, which uses TCP/IP, replicates control traffic and forwarding states across the MC-LAG peers and communicates the
operational state of the MC-LAG peers. Because ICCP uses TCP/IP to communicate between the MC-LAG peers, the MC-LAG
peers must be connected. ICCP messages exchange MC-LAG configuration parameters and ensure that both peers use the
correct LACP parameters. The connection used to support the ICCP communications is called the interchassis link-protection
link (ICL-PL).
The ICL-PL provides redundancy when a link failure (for example, an MC-LAG trunk or access port) occurs on one of the active
links. The ICL-PL can be a single Ethernet interface or an aggregated Ethernet interface with multiple member links. It is
highly recommended that the connection be no less than a 10-Gigabit Ethernet interface and ideally an aggregated Ethernet
interface with multiple member links to support the potential throughput requirements and incorporate fault tolerance and
high availability. You can configure only one ICL-PL between the two peers, although you can configure multiple MC-LAGs
between them which are supported by the single ICL-PL connection.
The ICL-PL should allow open communications between the associated MC-LAG peers. If the required communications are
prohibited and ICCP exchanges are not freely sent and received, instabilities with the MC-LAG might be introduced.


ICL-PL High Availability


Using aggregated links for the ICL-PL, over which the ICCP peering session is established, mitigates the possibility of a
split-brain state. A split-brain state occurs when the ICL-PL configured between the MC-LAG peers goes down. To work around
this problem, you enable backup liveness detection. With backup liveness detection enabled, the MC-LAG peers can
communicate through the keepalive link. Backup liveness detection is disabled by default and requires explicit configuration.
It is recommended that the out-of-band (OOB) management connection be used as the keepalive link.
During a split-brain state, the standby peer brings down its local member links in the MC-LAG by changing the LACP system
ID. When the ICCP connection is active, both of the MC-LAG peers use the configured LACP system ID. If the LACP system ID
is changed during failures, the server that is connected over the MC-LAG removes the links associated with the standby
MC-LAG peer from its aggregated Ethernet bundle. Note that split-brain states bring down the MC-LAG link completely if the
primary peer member is also down for other reasons. Recovery from the split-brain state occurs automatically when the ICCP
adjacency comes up between the MC-LAG peers.
When the ICL-PL is operationally down and the ICCP connection is active, the LACP state of the links with status control
configured as standby is set to the standby state. When the LACP state of the links is changed to standby, the server
connected to the MC-LAG makes these links inactive and does not use them for sending data thereby forcing all traffic to
flow through the active MC-LAG peer. This behavior is to avoid any split-brain scenario where both MC-LAG peers operate in
an active role without having knowledge of the other peer.


MC-LAG Modes
There are two modes in which an MC-LAG can operate: Active/Standby and Active/Active. Each state type has its own set of
benefits and drawbacks.
Active/Standby mode allows only one MC-LAG peer to be active at a time. Using LACP, the active MC-LAG peer signals to the
attached device (the server in our illustrated example) that its links are available to forward traffic. As you might guess, a
drawback to this method is that only half of the links in the server’s LAG are used at any given time. However, this method is
usually easier to troubleshoot than Active/Active because traffic is not hashed across all links and no shared MAC learning
needs to take place between the MC-LAG peers.
Using the Active/Active mode, all links between the attached device (the server in our illustrated example) and the MC-LAG peers are active and available for forwarding traffic. Because all links are active, traffic might need to pass between the MC-LAG peers; the ICL-PL accommodates this inter-peer traffic. We demonstrate this on the next slide. Currently, the QFX5100 Series switches support only the Active/Active mode, and this mode is the preferred deployment mode for most data center deployments.
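As an illustration only, the following sketch shows the interface-level statements that typically select the Active/Active mode on an MC-AE bundle; the AE interface name, IDs, and LACP system ID are placeholder values:
[edit interfaces ae0 aggregated-ether-options]
user@Switch-1# show
lacp {
    active;
    system-id 00:01:02:03:04:05;
    admin-key 1;
}
mc-ae {
    mc-ae-id 1;
    chassis-id 0;
    mode active-active;
    status-control active;
}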


Traffic Flow Example: Active/Active


This slide illustrates the available forwarding paths for an MC-LAG deployed in the Active/Active mode. In the Active/Active
mode, the ICL-PL trunk port can be used to forward traffic for all VLANs to which it is assigned. This is especially helpful when
one link in the MC-LAG fails and the traffic’s destination is reachable through the peer with the failed link. In this failure
scenario, the traffic is forwarded through the surviving link to the other MC-LAG peer and then over the ICL-PL connection
and on to its intended destination.
To ensure proper forwarding with a functional MC-LAG deployment, you should be aware that some Spanning Tree Protocol (STP) guidelines exist when deploying MC-LAG. It is recommended that you enable STP globally on the MC-LAG peers to avoid local mis-wiring loops within a single peer or between both peers. It is also recommended that you disable STP on the ICL-PL link; otherwise, it might block ICL-PL ports and disable protection. When an MC-LAG is deployed between switches in a tiered environment, for example between the access and distribution layers or between the distribution and core layers, STP should be disabled on the MC-AE interfaces participating in the MC-LAG. In the situation shown on the slide, where the MC-LAG is defined on switches in the access layer, you should mark the MC-AE interfaces as edge interfaces and ensure that any STP BPDUs received from the connected device are blocked. Blocking BPDUs on edge ports helps ensure there are no unwanted STP topology changes in your environment.
Note
A loop-free state is maintained between MC-LAG peers through
ICCP and through the block filters maintained on each peer. The
ICCP messages exchanged between MC-LAG peers include state
information, and the block filters determine how traffic is forwarded.
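For illustration, one way the edge and BPDU-blocking guidelines above might be expressed on an ELS-based switch is sketched below; the interface name is a placeholder, and the exact hierarchy can vary by platform and release:
[edit protocols rstp]
user@Switch-1# show
bpdu-block-on-edge;
interface ae0 {
    edge;
}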


Layer 2 Unicast: MAC Learning and Aging


When a MAC address is learned on a single-homed connection on one of the MC-LAG peers, as shown in the example, that
MAC address is propagated to the other MC-LAG peer using ICCP. The remote peer receiving the MAC address through ICCP,
Switch-2 in our example, adds a new MAC entry in its bridge table associated with the corresponding VLAN and associates
the newly added forwarding entry with the ICL-PL link.
All learned MAC addresses, regardless of whether they are learned locally or through ICCP from the remote peer, are removed from the bridge table when the aging timer expires on both peers. When traffic is seen by either one of the MC-LAG peers from a known MAC address, the aging timer resets.
When all MC-LAG interfaces are up and operational, both peers forward packets received through interfaces other than the ICL-PL and destined to the devices attached through the MC-LAG interfaces. This is known as local affinity and is the preferred forwarding method. If the MC-LAG interfaces are down on the local peer, packets received through interfaces other than the ICL-PL are redirected over the ICL-PL towards the remote MC-LAG peer, which in turn forwards those packets out its local MC-LAG interface towards the attached device.


Layer 2 Multicast Support


By default, when multicast traffic is received on an MC-LAG peer (or any Layer 2 switch), it floods that traffic out all interfaces
associated with the VLAN in which the traffic was received. This default behavior is not ideal in most environments because
of the unnecessary resource consumption that occurs. To avoid this unnecessary resource consumption, you can enable Internet Group Management Protocol (IGMP) snooping. Note that in an MC-LAG deployment the multicast traffic is always flooded over the ICL-PL connection.
IGMP snooping controls multicast traffic in a switched network. As previously mentioned, when IGMP snooping is not enabled, a switch floods multicast traffic out all ports assigned to the associated VLAN, even if the hosts on the network do not want the multicast traffic. With IGMP snooping enabled, a switch monitors the IGMP join and leave messages that its attached hosts send toward a multicast router. This enables the switch to keep track of the multicast groups for which it has interested receivers and the ports assigned to those receivers. The switch then uses this information to make intelligent decisions and to forward multicast traffic only to the interested destination hosts.
In an MC-LAG configuration, IGMP snooping replicates the Layer 2 multicast routes so that each MC-LAG peer has the same
routes. If a device is connected to an MC-LAG peer by way of a single-homed interface, IGMP snooping does not replicate the
join message to its IGMP snooping peer.
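As a simple illustration, IGMP snooping might be enabled for a single VLAN as follows; the VLAN name v100 is a placeholder:
[edit protocols igmp-snooping]
user@Switch-1# show
vlan v100;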


Layer 3 Routing
Layer 3 inter-VLAN routing can be provided through the MC-LAG peers using integrated routing and bridging (IRB) interfaces and VRRP. This allows compute devices to communicate with devices on other Layer 3 subnets using a gateway on their first-hop infrastructure device (their directly attached access switch), which can expedite the communication process.
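A hedged sketch of an IRB interface with VRRP, of the kind this design implies, follows; the VLAN name, addresses, and group numbers are placeholders:
[edit]
user@Switch-1# show interfaces irb unit 100
family inet {
    address 10.1.100.2/24 {
        vrrp-group 100 {
            virtual-address 10.1.100.1;
            priority 200;
            accept-data;
        }
    }
}
[edit]
user@Switch-1# show vlans v100
vlan-id 100;
l3-interface irb.100;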


We Discussed:
• Traditional multitier data center architectures;
• Using link aggregation and redundant trunk groups; and
• Using multichassis link aggregation.


Review Questions
1.

2.

3.


Lab: Designing a Multitier Architecture


The slide provides the objective for this lab.

Answers to Review Questions
1.
LAG, RTG and MC-LAG can be used for server high availability when connecting to your access layer.
2.
In an active-active MC-LAG deployment traffic is distributed between the LAG interfaces. When using RTG, one link is used while the
other link operates in a standby capacity with all traffic traversing only the active link.
3.
MC-LAG allows you to implement an active-active scenario which will distribute traffic across all links in the MC-LAG bundle. This
behavior allows you to utilize all links and bandwidth more efficiently.


Chapter 5: Ethernet Fabric Architectures



We Will Discuss:
• Key concepts and components of a Virtual Chassis;
• Key concepts and components of a Virtual Chassis Fabric (VCF);
• Key concepts and components of a QFabric system; and
• Key concepts and components of Junos Fusion.


Ethernet Fabric Overview


The slide lists the topics we will discuss. We discuss the highlighted topic first.


The New Data Center


The slide describes some of the requirements of the next generation data centers.


Ethernet Fabric Solution


Juniper Networks has four Ethernet fabric solutions including:
1. Virtual Chassis;
2. Virtual Chassis Fabric (VCF);
3. QFabric; and
4. Junos Fusion.
Each of these solutions allows multiple devices (up to 128 nodes in some cases) to be joined together to form a single logical switch. Each solution lets the administrator use a single CLI session to manage all of the devices that form the logical switch. Also, each solution has its own methodology to prevent Layer 2 loops while minimizing, and in some cases eliminating, the need for spanning tree protocols in the data center.


Juniper Network Solutions


The slide shows a comparison of the various Juniper Networks data center solutions.


Virtual Chassis
The slide highlights the topic we discuss next.


Virtual Chassis Defined


You can connect two or more switches together to form one unit and manage the unit as a single chassis, called a Virtual
Chassis. The Virtual Chassis system offers you add-as-you-grow flexibility. A Virtual Chassis can start with two switches and
grow, based on your needs, to as many as ten interconnected switches. This ability to grow and expand within and across
racks is a key advantage in many data center environments. We discuss additional benefits and design and operational considerations on subsequent slides in this chapter.

Valid Chassis Combinations


A few different switch models can be used to create a Virtual Chassis. In many environments, this allows administrators to
collapse the access and aggregation layers into a single layer making the network more efficient and less complicated to
manage.


A Mix of Switch Types

A mixed Virtual Chassis is a Virtual Chassis consisting of a mix of different switch types. Only certain mixtures of switches are
supported. It can consist either of EX4200, EX4500, or EX4550 Series switches or it can consist of EX4300, QFX3500,
QFX3600, or QFX5100 Series switches.


Control Plane Redundancy


In a Virtual Chassis configuration, one of the member switches is elected as the master Routing Engine (RE) and a second
member switch is elected as the backup RE. This design approach provides control plane redundancy and is a requirement
in many enterprise environments.
Having redundant REs enables you to implement nonstop active routing (NSR) and nonstop bridging (NSB), which allow for a transparent switchover between REs without requiring a restart of supported routing protocols and supported Layer 2 protocols, respectively. Both REs are fully active in processing protocol sessions, so each can take over for the other.
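For reference, the statements commonly associated with enabling GRES, NSR, and NSB on a Virtual Chassis are sketched below; treat this as an outline to verify against your release rather than a complete configuration:
{master:0}[edit]
user@Switch-1# set chassis redundancy graceful-switchover
user@Switch-1# set system commit synchronize
user@Switch-1# set routing-options nonstop-routing
user@Switch-1# set protocols layer2-control nonstop-bridging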
You can connect certain switches together to form a Virtual Chassis system, which you then manage as a single device.
Comparatively speaking, managing a Virtual Chassis system is much simpler than managing up to ten individual switches.
For example, when upgrading the software on a Virtual Chassis system, only the master switch must have the software
upgraded. However, if all members function as standalone switches, all individual members must have the software
upgraded separately. Also, in a Virtual Chassis scenario, it is not necessary to run the Spanning Tree Protocol (STP) between
the individual members because in all functional aspects, a Virtual Chassis system is a single device.


Virtual Chassis Components


You can interconnect one to ten switches to form a Virtual Chassis. Each switch has one to three Packet Forwarding Engines
(PFEs) depending on the platform. All PFEs are interconnected, either through internal connections or through the Virtual
Chassis ports (VCPs). Collectively, the PFEs and their connections constitute the Virtual Chassis backplane.
You can use the built-in QSFP+ VCPs on the rear of the EX4300 switches (or the dedicated VCPs on other chassis) or 10 GbE
uplink ports, converted to VCPs, to interconnect the member switches’ PFEs. To use an uplink port as a VCP, explicit
configuration is required.
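As an example of that explicit configuration, an uplink port can typically be converted to a VCP with an operational mode command similar to the following; the slot, port, and member values shown are placeholders:
{master:0}
user@Switch-1> request virtual-chassis vc-port set pic-slot 1 port 0 member 2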


Virtual Chassis Cabling Options: Part 1


This slide illustrates one of the recommended cabling options and provides some related information. The actual cabling
distances are dependent on the cable or optic type and capabilities. Please refer to the latest documentation for your
specific platform to determine actual maximum distances.


Virtual Chassis Cabling Options: Part 2


This slide illustrates another recommended cabling option.


Virtual Chassis Cabling Options: Part 3


This slide illustrates another recommended cabling option.


Member ID Assignment and Considerations


The master switch typically assumes a member ID of 0 because it is the first switch powered on. Member IDs can be
assigned manually using the preprovisioned configuration method or dynamically from the master switch.
If assigned dynamically, the master switch assigns each member added to the Virtual Chassis a member ID from 1 through 9, making the complete member ID range 0–9. The master assigns each switch a member ID based on the sequence in which the switch was added to the Virtual Chassis system. The member ID associated with each member switch is preserved, for the sake of consistency, across reboots. This preservation is helpful because the member ID is also a key reference point when naming individual interfaces. The member ID serves the same purpose as a slot number when configuring interfaces.
Continued on the next page.

Member ID Assignment and Considerations (contd.)
Note that when the member ID is assigned by the master switch, you can change the assigned ID values using the CLI. The
LCD and CLI prompt displays the member ID and role assigned to that switch. The following sequence shows an example of
the CLI prompt and how to change the member ID:
{master:0}
user@Switch-1> request virtual-chassis renumber member-id 0 new-member-id 5
To move configuration specific to member ID 0 to member ID 5, please
use the replace command. e.g. replace pattern ge-0/ with ge-5/
Do you want to continue ? [yes,no] (no) yes

{master:0}
user@Switch-1>

Switch-1 (ttyu0)

login: user
Password:

--- JUNOS 13.2X51-D20.3 built 2014-05-02 00:43:08 UTC

{master:5}
user@Switch-1>
If you must shut down a specific member of a Virtual Chassis, you can use the request system halt member command as shown in the following example:
{master:5}
user@Switch-1> request system halt member ?
Possible completions:
<member> Halt specific virtual chassis member (0..9)
If needed, you can access individual members of a Virtual Chassis system using the request session command from operational mode, as shown in the following example:

{master:5}
user@Switch-1> request session member 1

--- JUNOS 13.2X51-D20.3 built 2014-05-02 00:43:08 UTC


{backup:1}
user@Switch-1>


Think About It!

This slide provides an opportunity to discuss and think about how interfaces are named within a Virtual Chassis.


Management Connectivity: Part 1


The management Ethernet ports on the individual member switches are automatically associated with a management VLAN.
This management VLAN uses a Layer 3 virtual management interface called vme, which facilitates communication through
the Virtual Chassis system to the master switch even if the master switch’s physical Ethernet port designated for
management traffic is inaccessible.
When you set up the master switch, you specify an IP address for the virtual management Ethernet (vme) interface. This
single IP address allows you to configure and monitor the Virtual Chassis system remotely through Telnet or SSH regardless
of which physical interface the communications session uses.
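For illustration, assigning the vme address might look like the following; the address shown is a placeholder:
[edit interfaces]
user@Switch-1# set vme unit 0 family inet address 10.210.14.100/24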


Management Connectivity: Part 2


All member switches participating in a Virtual Chassis system run virtual console software. This software redirects all
console connections to the master switch regardless of the physical console port through which the communications session
is initiated.
The ability to redirect management connections to the master switch simplifies Virtual Chassis management tasks and
creates a level of redundancy. Generally speaking, you can obtain all status-related information for the individual switches
participating in a Virtual Chassis system through the master switch. It is, however, possible to establish individual virtual
terminal (vty) connections from the master switch to individual member switches.
If needed, you can access individual members of a Virtual Chassis system using the request session command from operational mode, as shown in the following example:
{master:5}
user@Switch-1> request session member 1

--- JUNOS 13.2X51-D20.3 built 2014-05-02 00:43:08 UTC


{backup:1}
user@Switch-1>


Software Upgrades
You perform software upgrades within a Virtual Chassis system on the master switch. Using the request system
software add command, all member switches are automatically upgraded. Alternatively, you can add the member option
with the desired member ID, as shown on the slide, to upgrade a single member switch.
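A hedged example of upgrading a single member follows; the package file name and member ID are placeholders:
{master:0}
user@Switch-1> request system software add /var/tmp/jinstall-qfx-5-13.2X51-D20.3-domestic-signed.tgz member 3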

Software Compatibility Check


For a new member switch to be added to and participate in a Virtual Chassis system, that switch must be running the same
software version as the master switch. The master switch checks the Junos OS version on all newly added switches before
allowing them to participate with the Virtual Chassis system. If a software version mismatch exists, the Virtual Chassis
master will assign a member ID to the new switch, generate a syslog message and place the newly added switch in the
inactive state. Any member switch in this state must be upgraded before actively joining and participating with the Virtual
Chassis.
You can upgrade individual switches manually or you can enable the automatic software upgrade feature. The automatic
software update feature automatically updates software on prospective member switches as they are added to a Virtual
Chassis. This method allows new member switches to immediately join and participate with the Virtual Chassis. We discuss
this configuration option in a subsequent section.
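As a sketch of that option, the automatic software update feature is typically enabled with a statement similar to the following; the package path is a placeholder:
[edit virtual-chassis]
user@Switch-1# set auto-sw-update package-name /var/tmp/jinstall-qfx-5-13.2X51-D20.3-domestic-signed.tgz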


Topology Discovery: Part 1


All switches participating in the Virtual Chassis system use the Virtual Chassis Control Protocol (VCCP) to discover the
system’s topology and ensure the topology is free of loops. Each member exchanges link-state advertisement (LSA) based
discovery messages between all interconnected PFEs within a Virtual Chassis system. Based on these LSA-based discovery
messages, each PFE builds a member switch topology in addition to a PFE topology map. These topology maps are used
when determining the best paths between individual PFEs.
Once the PFE topology map is built, the individual switches run a shortest-path algorithm for each PFE. This algorithm is
based on hop count and bandwidth. The result is a map table for each PFE that outlines the shortest path to all other PFEs
within the Virtual Chassis system. In the event of a failure, a new SPF calculation is performed.
To prevent broadcast and multicast loops, each switch creates a unique source ID egress filter table on each PFE.


Topology Discovery: Part 2


The slide illustrates the physical cabling and logical ring topology of a Virtual Chassis.


Topology Discovery: Part 3


Using the SPF algorithm, each PFE builds its own shortest-path tree to all other PFEs, based on hop count and bandwidth.
This process is automatic and is not configurable. The slide illustrates the basics of this process.


Inter-Chassis Packet Flow


As packets flow from one member switch to another through the Virtual Chassis system, they always take the shortest path.
The shortest path within a Virtual Chassis is based on a combination of hop count and bandwidth. The first example on the
slide shows a packet that enters the Virtual Chassis through port ge-0/0/10, which is a fixed Gigabit Ethernet port on the
member switch assigned member ID 0. The packet is destined for the egress port ge-3/0/14, which is a fixed Gigabit
Ethernet port on the member switch assigned member ID 3. Based on the physical topology, this packet passes through
member switch 4 to member switch 3, which owns the egress port in this example.
In the second example, we see similar results in which the shortest path, which traverses the member switch assigned
member ID 1, is selected.


Packet Forwarding Engine Scaling


Regardless of what switch types are actually attached to the Virtual Chassis, setting the mode to mixed causes the master
kernel and daemons to set hardware scaling numbers to the lowest common denominators (LCD). The slide shows some of
the hardware capabilities that are affected.


Software Features
In general, you can expect the Junos OS to support the LCD with regard to software features in a mixed Virtual Chassis. The slide shows the uniform resource locator (URL) where you can view the behavior of many of the software features when they are used in a mixed scenario.


Virtual Chassis Use Case - Switch Expansion


The slide shows a typical use case for a Virtual Chassis. Suppose that there is a single QFX5100-48S (i.e., 48 10GbE ports)
acting as a TOR switch and every server-facing port is used. To support more than 48 server-facing ports, the customer might
think that they have to replace the QFX5100 with a bigger switch. Instead, you can recommend that they simply add a
second QFX5100 to the top of the rack and cable the two switches together with VCPs, creating a two-member Virtual Chassis. To the administrator of the switch, the VC appears as a single switch with two FPCs.
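For a two-member Virtual Chassis such as this one, it is also common practice to disable split detection so that the surviving member continues forwarding if its peer fails; a minimal sketch follows:
[edit virtual-chassis]
user@Switch-1# set no-split-detection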


Virtual Chassis Use Case - Single TOR per Row


The slide shows another typical use case for a Virtual Chassis. Assuming the customer has rows and rows of access TOR switches, the administrator must manage each TOR switch individually and ensure a loop-free topology. Using Juniper switches (QFX5100s in this case), each of the TOR switches in a row could be cabled together using VCPs, forming a single Virtual Chassis. To the administrator of the switches on the slide, the VC appears as a single switch with five FPCs, and the VCCP protocol ensures a loop-free topology.


Virtual Chassis Fabric


The slide highlights the topic we discuss next.


What Is a VCF?
The Juniper Networks VCF provides a low-latency, high-performance fabric architecture that can be managed as a single
device. VCF is an evolution of the Virtual Chassis feature, which enables you to interconnect multiple devices into a single
logical device, inside of a fabric architecture. The VCF architecture is optimized to support small and medium-sized data
centers that contain a mix of 1-Gbps, 10-Gbps, and 40-Gbps Ethernet interfaces.
A VCF is constructed using a spine-and-leaf architecture. In the spine-and-leaf architecture, each spine device is
interconnected to each leaf device. A VCF supports up to twenty total devices, and up to four devices can be configured as
spine devices. QFX5100 Series switches can be placed in either the Spine or Leaf location while QFX3500, QFX3600, and
EX4300 Series switches should only be wired as Leaf devices in a mixed scenario.


Three Stage Clos Fabric


In the 1950s, Charles Clos first wrote about his idea of a non-blocking, multistage, telephone switching architecture that
would allow calls to be completed. The switches in his topology are called crossbar switches. A Clos network is based on a
three-stage architecture, an ingress stage, a middle stage, and an egress stage. The theory is that there are multiple paths
for a call to be switched through the network such that calls will always be connected and not "blocked" by another call. The
term Clos “fabric” came about later as people began to notice that the pattern of links looked like threads in a woven piece
of cloth.

You should notice that the goal of the design is to provide connectivity from one ingress crossbar switch to an egress
crossbar switch. Notice that there is no need for connectivity between crossbar switches that belong to the same stage.


VCF is Based on a Clos Fabric


The diagram shows a VCF using Juniper Networks switches. In a VCF the Ingress and Egress stage crossbar switches are
called Leaf nodes. The middle stage crossbar switches are called Spine nodes. Most diagrams of a VCF do not present the
topology with three distinct stages as shown on this slide. Most diagrams show a VCF with the Ingress and Egress stages combined into a single stage. It would be like taking the top of the diagram and folding it over onto itself, with all Spine nodes on top and all Leaf nodes on the bottom of the diagram (see the next slide).


Spine and Leaf Architecture: Part 1


To maximize the throughput of the fabric, each Leaf node should have a connection to each Spine node. This ensures that each network-facing interface is always two hops away from any other network-facing interface. This creates a highly resilient fabric with multiple paths to all other devices. An important fact to keep in mind is that a member switch has no knowledge of its location (Spine or Leaf) in a VCF; the Spine or Leaf function is simply a matter of the device's physical location in the fabric. It is highly recommended, and a best practice, to place QFX5100 Series devices (particularly the QFX5100-24Q) in the Spine position because they are designed to handle the throughput load of a fully populated VCF (up to 20 nodes).


Spine and Leaf Architecture: Part 2


The slide shows that there are four distinct paths (1 path per Spine node) between Host A and Host B across the fabric. In a
VCF, traffic is automatically load balanced over those four paths using a hash algorithm (keeps frames from same flow on
same path). This is unlike Juniper’s Virtual Chassis technology where only one path to the destination is ever chosen for
forwarding of data.


VCF Benefits
The slide shows some of the benefits (similar to Virtual Chassis) of VCF when compared to managing 32 individual switches.


Benefits Compared to Virtual Chassis


The slide shows some of the benefits of using VCF over and above the benefits realized by Virtual Chassis.


VCF Similarities to Virtual Chassis


The slide lists the major similarities between VCF and Virtual Chassis.


Differences Between VCF and Virtual Chassis


The slide list some of the major differences between VCF and Virtual Chassis.


VCF Components
You can interconnect up to 20 QFX5100 Series switches to form a VCF. A VCF can consist of any combination of model
numbers within the QFX5100 family of switches. QFX3500, QFX3600, and EX4300 Series switches are also supported in the
line card role.
Each switch has a Packet Forwarding Engine (PFE). All PFEs are interconnected by Virtual Chassis ports (VCPs). Collectively, the PFEs and their VCP connections constitute the VCF.
You can use the built-in 40GbE QSFP+ ports or SFP+ uplink ports, converted to VCPs, to interconnect the member switches’
PFEs. To use an uplink port as a VCP, explicit configuration is required.


Spine Nodes
To be able to support the maximum throughput, QFX5100 Series switches should be placed in the Spine positions. It is
further recommended to use the QFX5100-24Q switch in the Spine position. Although any QFX5100 Series switch will work in the Spine position, it is the QFX5100-24Q that supports 32 40GbE QSFP+ ports, which allows for the maximum expansion possibility (remember that 16 Leaf nodes would take up 16 QSFP+ ports on each Spine). Spines are typically configured in the RE role (discussed later).


Leaf Nodes
Although not a requirement, it is recommended to use QFX5100 Series devices in the Leaf position. Using a non-QFX5100 Series switch (even just one different switch) requires that the entire VCF be placed into “mixed” mode. When a VCF is placed into mixed mode, the hardware scaling numbers for the VCF as a whole (MAC table size, routing table size, and many more) are scaled down to the lowest common denominator among the potential member switches. It is recommended that each Leaf node have a VCP connection to every Spine node.
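For reference, placing members into the mixed mode described above is typically done with an operational mode command along the lines of the following; verify the exact options against your platform and release before use:
{master:0}
user@device> request virtual-chassis mode fabric mixed reboot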


Member Roles
The slide shows the different Juniper switches that can participate in a VCF along with their recommended node type (Spine
or Leaf node) as well as their capability to become an RE or line card. It is always recommended to use QFX5100 Series
switches in the Spine position. All other supported switch types should be place in the Leaf position. In a VCF, only a
QFX5100 Series device can assume the RE role (even if you try to make another switch type an RE). Any supported switch
type can be assigned the linecard role.
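As an illustration of assigning these roles, a preprovisioned VCF member list might look like the following sketch; the serial numbers shown are placeholders:
[edit virtual-chassis]
user@VCF# show
preprovisioned;
member 0 {
    role routing-engine;
    serial-number TA3714070001;
}
member 1 {
    role routing-engine;
    serial-number TA3714070002;
}
member 2 {
    role line-card;
    serial-number TA3714070003;
}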


Master RE
A VCF has two devices operating in the Routing Engine (RE) role—a master Routing Engine and a backup Routing Engine. All
Spine nodes should be configured for the RE role. However, based on the RE election process only two REs will be elected.
Any QFX5100 Series switch that is configured as an RE but is not elected to the master or backup RE role will take on the linecard role.
A QFX5100 Series switch configured for the RE role but operating in the linecard role can perform all leaf-related or spine-related functions within a VCF without limitation.
The device that functions as the master Routing Engine:
• Should be a spine device (a “must” for Juniper support).
• Manages the member devices.
• Runs the chassis management processes and control protocols.
• Represents all the member devices interconnected within the VCF configuration. (The hostname and other
parameters that you assign to this device during setup apply to all members of the VCF.)


Backup RE
The device that functions as the backup Routing Engine:
• Should be a spine device (a “must” for Juniper support).
• Maintains a state of readiness to take over the master role if the master fails.
• Synchronizes with the master in terms of protocol states, forwarding tables, and so forth, so that it preserves
routing information and maintains network connectivity without disruption when the master is unavailable.


Linecard Role
The slide describes the functions of the linecard in a VCF.


Spine Node as a Linecard

If more than two devices are configured into the RE role, not every one of those devices will actually take on the RE role.
Instead, two REs (master and backup) will be elected and any other device will be placed into the line card role. The slide
describes the behavior of a Spine node that has been configured for the RE role but has actually taken on the linecard role.


Required for Obtaining Juniper Support


Although it is possible not to follow the best practices listed on this slide, some of them must be in place in order to receive support from JTAC for your VCF. The best practices that are also required for Juniper support include the following:
• All Spine nodes must be QFX5100 Series switches;
• The RE role must only be assigned to Spine nodes; and
• All Leaf nodes must be configured for the linecard role.

Other Best Practices—Not Required for Juniper Support


The other best practices listed on the slide are highly recommended as part of the design of your VCF.


Integrated Control Plane


A VCF is controlled by a single switch that is elected as the master RE. One switch will be elected to the backup RE role. You
can also configure other switches to take on the backup RE role in case of a master RE failure. All control plane and
forwarding plane data between members traverse the VCPs. To learn the topology and ensure a loop-free forwarding plane, the member switches run the Virtual Chassis Control Protocol (VCCP) between each other. You can think of VCCP as a modified version of the Intermediate System-to-Intermediate System (IS-IS) protocol. VCCP allows each switch to calculate shortest forwarding paths for unicast data as well as to form bidirectional multicast distribution trees (MDTs) for forwarding broadcast, unknown unicast, and multicast (BUM) data. Also, by default, special forwarding classes (queues) are enabled specifically for forwarding VCCP data within the fabric.


Topology Discovery: Part 1


All switches participating in the VCF system use the VCCP to discover the system’s topology and ensure the topology is free of
loops. Each member exchanges link-state advertisement (LSA) based discovery messages between all interconnected PFEs
within a VCF system. Based on these LSA-based discovery messages, each PFE builds a member switch topology in addition
to a PFE topology map. These topology maps are used when determining the best paths between individual PFEs.
Once the PFE topology map is built, the individual switches run a shortest-path algorithm for each PFE. This algorithm is
based on hop count and bandwidth. The result is a map table for each PFE that outlines all of the paths to all other PFEs
within the Virtual Chassis system. In the event of a failure, a new SPF calculation is performed.


Topology Discovery: Part 2


The slide illustrates the physical cabling and logical topology of a five-member VCF.


Topology Discovery: Part 3


Using a modified SPF algorithm, each PFE builds its own loop-free, multipath tree to all other PFEs, based on hop count and
bandwidth. This process is automatic and is not configurable. The slide illustrates the basics of this process.


Smart Trunks
There are several types of trunks that you will find in a VCF.
1. Automatic Fabric Trunks - When there are two VCPs between members (2x40G between member 4 and 0) they
are automatically aggregated together to form a single logical connection using Link Aggregation Groups (LAGs).
2. Next-Hop Trunks (NH-Trunks) - These are directly attached VCPs between the local member and any other member. In the slide, NHT1, NHT2, NHT3, and NHT4 are the NH-trunks for member 4.
3. Remote Destination Trunks (RD-Trunks) - These are the multiple, calculated paths between one member and a
remote member. These are discussed on the next slide.


RD-Trunks: Part 1
The slide shows how member 4 is able to determine (using what it learns in the VCCP LSAs) multiple paths to a remote
member (member 15 in the example). In this example, each link between Leaf and Spine is 40G. Because this VCF was
designed using best practices (similar links between all Leaf and Spine nodes), traffic from member 4 to member 15 can be
evenly distributed over the four equal-cost RD-trunks. The following slides show what happens when best practices are not followed.


RD-Trunks: Part 2

The slide shows how member 4 is able to determine (using what it learns in the VCCP LSAs) multiple paths to a remote
member (member 15 in the example). The paths do not need to be equal cost paths. All links between members are
40 Gbps except for the link between member 4 and 0 (80 Gbps) and the link between member 3 and 15 (10 Gbps). Based
on the minimum bandwidth of the path, member 4 will assign a weight to each path. This is shown in the next slide.


Weight Calculation Example


The slide shows a simplified example of how the weight calculation is performed. First notice the 4->0->15 path. Even though NHT1 is 80 Gbps, the minimum bandwidth along the path is 40 Gbps because the link between member 0 and 15 is 40 Gbps. The 4->1->15 and 4->2->15 paths each have a minimum bandwidth along the path of 40 Gbps because all links are 40 Gbps. The minimum bandwidth along the 4->3->15 path is 10 Gbps because the link between member 3 and 15 is 10 Gbps. To determine the weight to assign to each RD-Trunk, member 4 simply divides that trunk's minimum path bandwidth by the sum of the minimum bandwidths across all RD-Trunks. As you can see, an equal amount of traffic is sent along RD-Trunks 1, 2, and 3, while RD-Trunk 4 receives considerably less traffic. This behavior helps ensure that the 0->15 and 3->15 links are not oversaturated with traffic.
It should be noted that 40 Gbps of bandwidth goes unused along the path of NHT1. This wasted bandwidth could have been avoided had the designer followed VCF best practices (similar links between all Leaf and Spine nodes).
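As a worked example, assuming the weights are simply proportional to each path's minimum bandwidth as described above, the total minimum bandwidth is 40 + 40 + 40 + 10 = 130 Gbps. RD-Trunks 1 through 3 therefore each carry a weight of roughly 40/130 (about 31 percent of the hashed traffic), while RD-Trunk 4 carries roughly 10/130 (about 8 percent).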


Fabric Header
A HiGig fabric header is used to pass frames over VCPs. In the case of layer 2 switching, when an Ethernet frame arrives, the
inbound member will perform an Ethernet switching table lookup (based on MAC address) to determine the destination
member and port. After that, the inbound member encapsulates the incoming frame in the fabric header. The fabric header
specifies the destination member and port (among other things). All members along the path will forward the encapsulated
frame by performing lookups on the fabric header only. Once the frame reaches the destination member, the fabric header is
removed and the Ethernet frame is sent out of the destination port without a second MAC table lookup.


Default Load Balancing


The slide lists two sets of inputs to the hash that are used to load balance traffic over RD-trunks. There are also inputs for IPv6 packets, which include the Next Header, Source and Destination Ports, and Source and Destination IP addresses. These inputs to the hash algorithm are hard-coded in the PFE and cannot be modified.


Adaptive Load Balancing


Using the standard load balancing setting on a VCF can cause problems when there are elephant flows in the network. Using
Ethernet frames as an example, an elephant flow would occur when a high percentage of frames arrive that have the same
Source and Destination MAC, Ethertype, VLAN ID, Incoming Port ID, and Incoming member ID. Based on the default load
balancing algorithm, every one of the frames from the elephant flow would be forwarded over the same RD-Trunk. Essentially
the elephant flow could block that trunk for smaller flows (mice flows) that hash to that same trunk. The adaptive load
balancing feature, or flowlet splicing, was created to solve this problem. Once it is enabled through configuration on the VCF, member 4 starts tracking the inactivity time between received frames for all flows. By default, if the inactivity time between two frames of a flow is 16 microseconds, the next frame is forwarded over a different trunk. It is possible to set a different value for the inactivity timer; however, the timer threshold should be set to the maximum latency skew across the VCF fabric. This way, when a packet is assigned to a different path inside the fabric, it is guaranteed that there will not be any packet reordering, within its flow, external to the fabric.
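A hedged sketch of enabling this feature follows; the fabric-load-balance hierarchy shown here is from memory and should be verified against the documentation for your release:
[edit]
user@VCF# set fabric-load-balance flowlet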


VCF Use Case


The slide shows the typical use case for a VCF. Notice that the TOR access switches have been replaced with VCF leaf nodes
that help form part of a single, logical switch. If a customer already has the TOR switch models listed on the slide, those switches can simply be added to the VCF. The uplink from a VCF is typically a set of LAG connections to the Edge/Aggregation layer (the EX9200s in the slide).


QFabric
The slide highlights the topic we discuss next.


Addressing the Challenges


The Juniper Networks QFabric system offers a solution to many of the challenges found in legacy data center environments.
The QFabric system collapses the various tiers found in legacy data center environments into a single tier. In the QFabric
system, all Access Layer devices connect to all other Access Layer devices across a very large scale fabric backplane. This
architecture enables the consolidation of data center endpoints and provides better scaling and network virtualization
capabilities than traditional data centers.
The QFabric system functions as a single, nonblocking, low-latency switch that can support up to thousands of 10-Gigabit
Ethernet ports or 2-Gbps, 4-Gbps, or 8-Gbps Fibre Channel ports to interconnect servers, storage, and the Internet across a
high-speed, high-performance fabric. The system is managed as a single entity. The control and management element of the
system automatically senses when components are added or removed from the system and dynamically adjusts the amount
of processing resources required to support the system. This intelligence helps the system use the minimum amount of
power to run the system efficiently.
The architecture of the system is flat, nonblocking, and lossless, which allows the network fabric to offer the scale and
flexibility required by small, medium, and large-sized data centers.


System Components
The QFabric system comprises four distinct components. These components are illustrated on the slide and briefly described
as follows:
• Node devices: The linecard component of a QFabric system, Node devices act as the entry and exit point into
and from the fabric.
• Interconnect devices: The fabric component of a QFabric system, Interconnect devices interconnect and provide
high-speed transport for the attached Node devices.
• Director devices: The primary Routing Engine component of a QFabric system, Director devices provide control
and management services for the system and deliver the primary user interface that allows you to manage all
components as a single device.
• EX Series switches: The control plane link of a QFabric system, EX Series switches provide the required
connectivity between all other system components and facilitate the required control and management
communications within the system.


Node Devices
Node devices connect endpoints (such as servers or storage devices) or external networks to the QFabric system. Node
devices have redundant connections to the system's fabric through Interconnect devices. Node devices are often
implemented in a manner similar to how top-of-rack switches are implemented in legacy multitier data center environments.
By default, Node devices connect to servers or storage devices. However, you can use Node devices to connect to external
networks by adding them to the network Node group.
The QFX3500 and QFX3600 switches can be used as Node devices within a QFabric system. We provide system details for
these devices on subsequent slides.
By default, the QFX3500 and QFX3600 switches function as standalone switches or Node devices depending on how the device is ordered. However, through explicit configuration, you can change the operation mode from standalone to fabric. We provide the conversion process used to change the operation mode from standalone to fabric on a subsequent slide in this content.


QFX3500 Node Devices


The slide provides a detailed illustration of the QFX3500 Node device with some key information. As a node device, the
QFX3500’s four 40 GbE interfaces are dedicated uplink interfaces. However, you can use only two of the uplink ports, if
desired, resulting in a 6:1 oversubscription ratio when fully provisioned. Note that ports 0-5 and 42-47 have optional support
for Fibre Channel and are incompatible with 1 GbE. This means, when used as non-Fibre Channel ports, only 10 GbE
connections are supported.


QFX3600 Node Devices


The slide provides a detailed illustration of the QFX3600 Node device with some key information. By default, the first four
Quad Small Form-factor Pluggable Plus (QSFP+) 40Gb ports function as fabric uplink ports and the remaining 12 QSFP+
ports function as access ports. The default port assignments can be modified through configuration to designate as few as
two uplink ports and as many as eight uplink ports. Using two uplink ports results in a 7:1 oversubscription ratio when fully
provisioned. The revenue QSFP+ ports function, by default, as 4 x 10Gb ports using the breakout cables. These ports can
alternatively be configured as individual 40 GbE ports using the syntax shown below:
[edit]
root@qfabric# set chassis node-group name node-device name pic 1 ?
Possible completions:
+ apply-groups Groups from which to inherit configuration data
+ apply-groups-except Don't inherit configuration data from these groups
> fte 1x40G fte mode port configuration option
> xe 4x10G xe mode port configuration option
> xle 1x40G xle mode port configuration option
Note that while the physical structure and components of the QFX3600 Node device and the QFX3600-I Interconnect device
are the same, their roles are quite different. These devices have distinct part numbers and come preprovisioned for their
designated roles. There is, however, a process to convert a QFX3600 Node device to an Interconnect device and vice versa.
We cover that process later in this content.
You can convert a Node device to an Interconnect device using the request chassis device-mode
interconnect-device operational mode command followed by a system reboot. Before converting the system’s device
mode it is highly recommended that you back up the configuration.


QFX5100 Node Devices


The slide provides a detailed illustration of the QFX5100 Node devices with some key information. As shown on the slide
there are multiple QFX5100 models that can serve as Node devices in a QFabric System. As of this writing, the QFX5100-48S
and QFX5100-48T models in their various configurations can serve as Node devices. The QFX5100-24Q model is planned to
function as a Node device in a later software release. All QFX5100 models include redundant fan trays and power supplies.
The different models support two airflow design options, with air flowing either out of the rear of the device or into the rear of the device. In addition to redundant fan trays and power supplies, each model comes with a designated number of revenue ports, either 10 Gb or 40 Gb ports depending on the model. The 40 Gb ports can be used as fabric uplink ports, as channelized 4 x 10 Gb revenue ports using the appropriate breakout cables, or as 40 Gb revenue ports.
In addition to the QFX5100 models previously referenced, another model exists that is known as the QFX5100-96S. There
are currently no plans to allow the QFX5100-96S to serve as a Node device in a QFabric System. The QFX5100-96S is a 10
Gb switch that provides 96 10 Gb revenue ports along with eight quad small form-factor pluggable plus (QSFP+) 40 Gb ports.
The QFX5100 switches are a foundational component for multiple fabric architectures including Juniper’s mixed 1/10/
40GbE Virtual Chassis, Virtual Chassis Fabric, and QFabric architectures. The QFX5100 switches also support several open
architectures such as Spine and Leaf and Layer 3 fabrics. For more information about the QFX5100 switches, check their
respective datasheets at https://www.juniper.net/us/en/products-services/switching/qfx-series/qfx5100/.
Note
The software version required for a given QFX5100 model to function
as a Node device varies. Please check the technical publications for
your software version and device model for support details.


Interconnect Devices
Interconnect devices serve as the fabric between all Node devices within a QFabric system. Two or more Interconnect
devices are used in QFabric systems to provide redundant connections for all Node devices. Each Node device has at least
one fabric connection to each Interconnect device in the system. Data traffic sent through the system and between remote
Node devices must traverse the Interconnect devices, thus making this component a critical part of the data plane network.
We discuss the data plane connectivity details on a subsequent slide in this content.
The two Interconnect devices available are the QFX3008-I and the QFX3600-I Interconnect devices. The model deployed will
depend on the size and goals of the implementation. We provide system details for these devices and some deployment
examples on subsequent slides in this content.


QFX3008-I Interconnect Devices


The slide provides a detailed illustration of the QFX3008-I Interconnect device with some key information.


QFX3600-I Interconnect Devices


The slide provides a detailed illustration of the QFX3600-I Interconnect device with some key information.
Note that while the physical structure and components of the QFX3600 Node device and the QFX3600-I Interconnect device
are the same, their roles are quite different. These devices have distinct part numbers and come preprovisioned for their
designated roles.
You can convert an Interconnect device to a Node device using the request chassis device-mode node-device
operational mode command followed by a system reboot. Before converting the system’s device mode it is highly
recommended that you back up the configuration.


Director Devices
Together, two Director devices form a Director group. The Director group is the management platform that establishes,
monitors, and maintains all components in the QFabric system. The Director devices run the Junos operating system (Junos
OS) on top of a CentOS foundation.
These devices are internally assigned the names DG0 and DG1. The assigned name is determined by the order in which the
device is deployed. DG0 is assigned to the first Director device brought up and DG1 is assigned to the second Director device
brought up. The Director group handles tasks such as network topology discovery, Node and Interconnect device
configuration and startup, and system provisioning services.


QFX3100 Director Devices


This slide provides a detailed illustration of the QFX3100 Director device with some key information. Note that with the
exception of the Interface Modules, all other redundant hardware components on the QFX3100 Director devices are hot
swappable.


EX4200 Switches—Control Plane Network Infrastructure


The EX4200 Ethernet switches support the control plane network, which is a Gigabit Ethernet management network used to
connect all components within a QFabric system. This control plane network facilitates the required communications
between all system devices. By keeping the control plane network separate from the data plane, the QFabric switch can
scale to support thousands of servers and storage devices. We discuss the control plane network and the data plane
network in more detail on subsequent slides in this content.
The model of switch, the number of switches, and the configuration associated with these switches depend on the actual deployment of the QFabric system. In small deployments, two standalone EX4200 switches are required, whereas medium to large deployments require eight EX4200 switches configured as two Virtual Chassis with four members each. Regardless of the deployment scenario, the 1 Gb ports are designated for the various devices in the system and the uplink ports are used to interconnect the standalone switches or the Virtual Chassis.
We discuss the port assignments and the required configuration for the Virtual Chassis and standalone EX4200 Series
switches in the next section of this content.

Chapter 5–72 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Designed for High Availability


The QFabric system is designed for high availability. The individual components and the overall hardware and software
architectures of the system include redundancy to ensure a high level of operational uptime.
In addition to the redundant hardware components at the device level, the architectural design of the control and data
planes also includes many important qualities that support a high level of system uptime. One key consideration and
implementation reality is the separation of the control plane and data plane. This design, of course, is an important design
goal for all devices that run the Junos OS. We cover the implementation details of the control and data planes later in this
content.
Likewise, the system's software architecture maintains high availability by using resilient fabric provisioning and fabric
management protocols to establish and maintain the QFabric system.

www.juniper.net Ethernet Fabric Architectures • Chapter 5–73


Juniper Networks Design—Data Center

QFX3000-G Deployment Example


Large QFabric system deployments include four QFX3008-I Interconnect devices and up to 128 Node devices, which offers
up to 6,144 10Gb Ethernet ports. Note that each Node device has a 40Gb uplink connection to each Interconnect device,
thus providing redundant paths through the fabric.
The control plane Ethernet network for large deployments includes two Virtual Chassis, consisting of four EX4200 Series
switches each. Although not shown in this illustration, each component in the system has multiple connections to the control
plane network thus ensuring fault tolerance.

Chapter 5–74 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

QFX3000-M Deployment Example


Small QFabric system deployments include four QFX3600-I Interconnect devices and up to 16 Node devices, which offers up
to 768 10Gb Ethernet ports. Note that each Node device has a 40Gb uplink connection to each Interconnect device, thus
providing redundant paths through the fabric.
The control plane Ethernet network for small deployments includes two EX4200 Series switches. Although not shown in this
illustration, each component in the system has multiple connections to the control plane network thus ensuring fault
tolerance.

www.juniper.net Ethernet Fabric Architectures • Chapter 5–75


Juniper Networks Design—Data Center

A Design Option
In some data center environments the QFX3000-M QFabric System may be too small while the QFX3000-G QFabric System
may be too large. In such environments, you could, as shown on the slide, interconnect multiple QFX3000-M QFabric
Systems using a pair of core switches such as the EX9200 Series switches. For detailed information on this design refer to
the associated guide found at: http://www.juniper.net/us/en/local/pdf/reference-architectures/8030012-en.pdf.

Chapter 5–76 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Node Groups
The slide provides a brief explanation of the Node group software abstraction along with some other key details that relate to
Node groups including the types of Node groups and the default Node group association for Node devices. We expand on
these points throughout this section.

Note
Node groups are also referred to as independent network
elements (INEs). The INE reference might show up in
some debug and log outputs.

www.juniper.net Ethernet Fabric Architectures • Chapter 5–77


Juniper Networks Design—Data Center

Server Node Groups


A server Node group is a single Node device functioning as a logical edge entity within the QFabric system. Server Node
groups connect server and storage endpoints to the QFabric system. As previously mentioned all Node devices boot up as a
server Node group by default.
As mentioned on the slide, server Node groups run only host-facing protocols such as Link Aggregation Control Protocol
(LACP), Link Layer Discovery Protocol (LLDP), Address Resolution Protocol (ARP), and Data Center Bridging Capability
Exchange (DCBX).
Members of a link aggregation group (LAG) from a server are connected to the server Node group to provide a redundant connection
between the server and the QFabric system. In use cases where redundancy is built into the software application running on
the server (for example, many Software as a Service (SaaS) applications), there is no need for cross Node device
redundancy. In those cases, a server Node group configuration is sufficient.
The Node device associated with a given server Node group is responsible for local Routing Engine (RE) and Packet
Forwarding Engine (PFE) functions. The Node device uses its local CPU to perform these functions.

Note
A Server Node group is sometimes referred to as a top-of-rack (ToR).

Chapter 5–78 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Redundant Server Node Groups


A redundant server Node group consists of a pair of Node devices that represent a single logical edge entity in a QFabric
system. Similar to server Node groups, redundant server Node groups connect server and storage endpoints to the QFabric
system. For Node devices to participate in a redundant server Node group, explicit configuration is required. To create a
redundant server Node group, you simply assign two Node devices to a server Node group.
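A minimal configuration sketch is shown below, assuming two Node devices with the hypothetical aliases node0 and node1 and a hypothetical group name RSNG-1; the hierarchy shown reflects the commonly documented QFabric node-group configuration and may vary slightly by release:

set chassis node-group RSNG-1 node-device node0
set chassis node-group RSNG-1 node-device node1

Once committed, the two Node devices operate as a single logical edge entity, and server-facing LAGs can span both members of the group.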
Like server Node groups, redundant server Node groups run only host-facing protocols such as Link Aggregation Control
Protocol (LACP), Link Layer Discovery Protocol (LLDP), Address Resolution Protocol (ARP), and Data Center Bridging
Capability Exchange (DCBX). Redundant server Node groups have mechanisms such as bridge protocol data unit (BPDU)
guard and storm control to detect and disable loops across ports. While firewalls, routers, switches, and other network
devices can be connected to redundant server Node groups, only host-facing protocols and Layer 2 traffic are processed. To
process network-facing protocols, such as Spanning Tree Protocols (STPs) and Layer 3 protocol traffic, the network device
must connect to a Node device in the network Node group. We discuss the network Node group next.
Continued on the next page.

www.juniper.net Ethernet Fabric Architectures • Chapter 5–79


Juniper Networks Design—Data Center
Redundant Server Node Groups (contd.)
Members of a LAG from a server to the QFabric system can be distributed across Node devices in the redundant server Node
group to provide a redundant connection. In cases where redundancy is not built into the software application on the server,
a redundant server Node group is desirable.
One of the Node devices in the redundant server Node group is selected as active and the other is the backup. The active
Node device is responsible for local Routing Engine (RE) functions and both Node devices perform the Packet Forwarding
Engine functions. If the active Node device fails, the backup Node device assumes the active role.

Note
Redundant server Node groups are also referred
to as PTORs or pair of TORs in some cases.

Chapter 5–80 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Network Node Group


A set of Node devices running server-facing protocols as well as network protocols such as STP, OSPF, PIM, and BGP to
external devices like routers, switches, firewalls, and load balancers is known as a network Node group. This Node group
exists by default and is named NW-NG-0. This name cannot be changed. Currently you can associate up to eight Node
devices with this group.
The network Node group can also fill the redundant server Node group's function of running server-facing protocols. Only one
network Node group can run in a QFabric system at a time.
In a redundant server Node group, the local CPUs on the participating Node devices perform both the RE and Packet
Forwarding Engine functionality. In a network Node group, the local CPUs perform only the Packet Forwarding Engine
function. Just as in a traditional modular switch, the RE function of the network Node group is located externally in the
Director group.
Note that if the QFabric System is only used for Layer 2 operations, the Network Node group does not strictly require a
dedicated Node device. In all other situations, the Network Node group does require at least one dedicated Node device.
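As an illustration, assigning a Node device to the predefined network Node group might look like the following sketch; the node2 alias is hypothetical, and the network-domain statement reflects the commonly documented way to designate NW-NG-0, so confirm the exact hierarchy against your release:

set chassis node-group NW-NG-0 network-domain
set chassis node-group NW-NG-0 node-device node2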

www.juniper.net Ethernet Fabric Architectures • Chapter 5–81


Juniper Networks Design—Data Center

Junos Fusion
The slide highlights the topic we discuss next.

Chapter 5–82 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Junos Fusion
Juniper Networks® Junos® Fusion addresses the challenges posed by traditional network architectures and provides
customers with a bridge from legacy networks to software-defined cloud networks. This innovative architecture is based on
three design principles: simplicity at scale, smart, and flexible.
A highly scalable fabric, Junos Fusion collapses multitier architectures into a single tier, reducing the number of devices in
the data center network and cutting CapEx. Junos Fusion is a centrally managed fabric that features plug-and-play
provisioning and auto-configuration capabilities, which greatly simplifies operations at scale and reduces OpEx while
accelerating the deployment of new applications and services.

www.juniper.net Ethernet Fabric Architectures • Chapter 5–83


Juniper Networks Design—Data Center

Fusion Terminology: Part 1


The slide shows the Junos Fusion topology. It includes Aggregation devices (only one Aggregation device is currently
supported), Satellite devices, Cascade ports, Upstream ports, and Extended ports.
Satellite devices have the following characteristics:
• Minimal management, control, and data planes;
• Managed entirely by the Aggregation device, depending upon device capability;
• Based on the Juniper standard Linux OS;
• Default forwarding behavior provides mux-demux functionality supporting 802.1BR;
• Provide a programmatic interface (JSON API) that enables an external controller to exploit the hardware capabilities of the device;
• Based on open standards;
• Plug-and-play deployment model; and
• On-device software storage (images, core dumps, diagnostic data for RMA purposes, and so on) is an optional optimization.
Unlike the Satellite device, the Aggregation device is a logical entity. In its simplest form, it is a set of aggregation switches
controlling the Satellite devices. It provides the smart core packet switching function as well as a controlling function, which
programs the Satellite devices.

Chapter 5–84 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Fusion Terminology: Part 2


The slide lists some of the basic functions of Aggregation devices (ADs) and Satellite devices (SDs).

www.juniper.net Ethernet Fabric Architectures • Chapter 5–85


Juniper Networks Design—Data Center

Aggregation Device: Internal and External Ports


The cascade ports on the Aggregation device are considered internal ports. Internal ports are used both for the in-band
management of the Satellite devices by the Aggregation device and for the forwarding of user data from one Satellite
device to another. External ports are standard network interfaces that can run any Junos-supported Layer 2 or Layer 3
protocol.
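As an illustration, a cascade port on the AD might be defined as follows; the interface name, FPC ID, and alias are hypothetical, and the statements reflect the general Junos Fusion configuration style rather than an exact, release-specific recipe:

set interfaces xe-0/0/10 cascade-port
set chassis satellite-management fpc 101 cascade-ports xe-0/0/10
set chassis satellite-management fpc 101 alias rack12-sd1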

Chapter 5–86 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Junos Fusion Deployment Models


There are two possible deployment models:
1. Multihomed Aggregation Device Cluster (currently not supported): The details of this model are yet to be
determined. This model allows Extended ports to be controlled by two control points (that is, two Aggregation devices).
2. Singlehomed Aggregation Device Cluster: In this deployment model, Satellite devices are connected to and
controlled by a single Aggregation device. For SD redundancy, LAG interfaces can be used for Cascade and
Upstream ports. For server redundancy, MC-LAG can be used to connect a single server to two singlehomed AD
Clusters.

www.juniper.net Ethernet Fabric Architectures • Chapter 5–87


Juniper Networks Design—Data Center

Single Point of Management


Junos Fusion allows for a single AD to control up to 64 SDs (128 SDs in the near future). All configuration and management
of the SDs is performed on the AD. There is virtually no need to access the CLI of the SD devices directly.
The slide shows a multihomed AD cluster which is not currently supported. The details of SD management in this model are
yet to be determined.

Chapter 5–88 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Dual Control Planes


Although not currently supported, Multihomed AD clusters will allow for redundant control planes and Active/Active Routing
Engines (REs).

www.juniper.net Ethernet Fabric Architectures • Chapter 5–89


Juniper Networks Design—Data Center

Scale
Currently, Junos Fusion allows for a single AD to control up to 64 SDs. If each SD is a QFX5100-96S, the Junos Fusion could
have a total of 6144 Extended Ports.

Chapter 5–90 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Supported Devices
The slide shows the currently supported ADs and SDs.

www.juniper.net Ethernet Fabric Architectures • Chapter 5–91


Juniper Networks Design—Data Center

Modes of Operation
There are two proposed modes of operation for a Junos Fusion:
1. Extended Mode: The basic premise of extended operating mode is "simple edge, smart core." The AD
contains a highly scalable Layer 2/Layer 3 networking stack and an advanced forwarding plane based on custom silicon.
Each SD appears as a logical line card in the AD; its physical port configuration and control and forwarding plane state
reside entirely on the Aggregation device. In this mode, the SD is auto-discovered and managed from the AD. An
SD forwards incoming traffic on an extended port to the AD, inserting the 802.1BR-defined encapsulation that contains
the EPID associated with the extended port. The AD, upon receiving the packet with 802.1BR encapsulation,
extracts the EPID and uses the tag to perform the MAC lookup for the incoming frame to determine the outbound
extended port. To forward a frame destined to an extended port, the AD inserts the 802.1BR encapsulation, setting
the EPID field to the destination extended port. The SD forwards the incoming traffic on the upstream port to the
destined extended port by extracting the EPID field from the 802.1BR header and looking up the
EPID-to-extended-port mapping table built by the satellite management protocol. In this mode of operation,
local switching on the SD is not possible; all incoming traffic must be sent to the AD for the lookup.
2. Program Mode (not currently supported): In program mode, the SD provides a well-defined programmatic
extension (JSON API) that allows an external application, running on the Aggregation device or some remote device,
to program the forwarding plane of the Satellite device. This programming involves identifying specific traffic flows
in the data path based on the fields in the Layer 2/Layer 3 header and specifying a forwarding action that overrides
the default 802.1BR forwarding decision. In program mode, the 802.1BR interface and the programmatic extensions coexist
and complement each other: 802.1BR provides simplicity, whereas the programmatic interface provides flexibility by
exploiting the forwarding chip capabilities of the SD.

Chapter 5–92 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Extended Mode
As described on the previous slide, Extended mode provides for standard IEEE 802.1BR forwarding behavior. No matter what
interface an incoming frame is destined to, it is always passed to the AD (using IEEE 802.1BR encapsulation) for the MAC
table lookup.

www.juniper.net Ethernet Fabric Architectures • Chapter 5–93


Juniper Networks Design—Data Center

Extended Mode Forwarding


The slide shows how Ethernet frames are forwarded within a Junos Fusion system (details can be found in the previous
slides).

Chapter 5–94 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Program Mode
Although not currently supported, program mode will allow the switching behavior of an SD to be modified. This
reprogramming is made possible through the JSON-RPC API. The API will allow third-party applications to be developed
that might enable an SD to perform local switching, packet filtering, and so on. Program mode will work in conjunction with IEEE
802.1BR forwarding (see the following slides).

www.juniper.net Ethernet Fabric Architectures • Chapter 5–95


Juniper Networks Design—Data Center

Potential Forwarding Application


The slide shows some of the features that could possibly be made available by a third party application using the JSON-RPC
API.

Chapter 5–96 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Program Mode Forwarding


The slide shows the forwarding behavior which might be made available when program mode is supported.

www.juniper.net Ethernet Fabric Architectures • Chapter 5–97


Juniper Networks Design—Data Center

High Level Software Architecture


The slide shows the software architecture of a Junos Fusion system. The AD is the only device that runs the Junos
operating system. The AD uses LLDP for autodiscovery of Satellite devices and the IEEE 802.1BR Control and
Status Protocol (CSP) for satellite management. Eventually, the JSON-RPC API will be made available to reprogram the
forwarding behavior of an SD. Notice that an SD does not run Junos; instead, an SD runs a Yocto-based Linux OS.

Chapter 5–98 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Naming Convention
The slide shows the Extended Port naming convention for a Junos Fusion. Each SD will be represented by an FPC slot
number that must be greater than or equal to 100. Other than that, the Extended ports follow the standard interface naming
convention of prefix-fpc/pic/port.
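For example, if a Satellite device is assigned FPC slot 101, its 10GbE access port 5 appears on the AD as xe-101/0/5 and can be configured like any local interface. A minimal sketch follows; the VLAN name is hypothetical:

set interfaces xe-101/0/5 unit 0 family ethernet-switching vlan members servers-v100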

www.juniper.net Ethernet Fabric Architectures • Chapter 5–99


Juniper Networks Design—Data Center

Auto LAG
You can add more than one link between the AD and an SD. When you do, there is no need to configure a LAG; instead, the links
are automatically placed into a LAG bundle.

Chapter 5–100 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Satellite Software Upgrades


Satellite devices are placed, through configuration, into software upgrade groups. A software upgrade group is a group of satellite
devices that are designated to run the same satellite software version using the same satellite software package. When a
satellite device is added to a Junos Fusion, the aggregation device checks whether the satellite device is using an FPC ID that is
included in a satellite software upgrade group. If it is, and the satellite device is not already running that version of satellite
software, the satellite device upgrades its software using the satellite software package associated with the upgrade group. When the
satellite software package associated with an existing satellite software group is changed, the satellite software for all
member satellite devices is upgraded using a throttled upgrade. The throttled upgrade ensures that only a few satellite
devices are updated at a time to minimize the traffic disruption caused by too many satellite devices upgrading
software simultaneously.
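A hedged configuration sketch follows; the group name, FPC ID range, and package file name are hypothetical and are shown only to illustrate the general approach of defining the group and then associating a satellite software package with it:

set chassis satellite-management upgrade-groups pod1-sds satellite 101-116

user@ad> request system software add /var/tmp/satellite-package.tgz upgrade-group pod1-sds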

www.juniper.net Ethernet Fabric Architectures • Chapter 5–101


Juniper Networks Design—Data Center

EVPN with VXLAN Encapsulation


Data center operators have the flexibility to deploy small or large Junos Fusion pods and interconnect them using EVPN with
VXLAN encapsulation. These standards-based network virtualization technologies are commonly used for transporting data
within and across data centers. With EVPN and VXLAN support, customers can seamlessly connect Junos Fusion pods within
and across data centers with optimal traffic forwarding. Customers also benefit from seamless connectivity in a multivendor
environment.

Chapter 5–102 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Use Cases
The slide shows the three use cases for Junos Fusion: Provider Edge, Data Center, and Campus.

www.juniper.net Ethernet Fabric Architectures • Chapter 5–103


Juniper Networks Design—Data Center

Use Case Comparison


The slide shows a use case comparison for Junos Fusion.

Chapter 5–104 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

We Discussed:
• Key concepts and components of a Virtual Chassis;
• Key concepts and components of a VCF;
• Key concepts and components of a QFabric System; and
• Key concepts and components of Junos Fusion.

www.juniper.net Ethernet Fabric Architectures • Chapter 5–105


Juniper Networks Design—Data Center

Review Questions
1.

2.

3.

Chapter 5–106 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Lab: Ethernet Fabric


The slide provides the objective for this lab.

www.juniper.net Ethernet Fabric Architectures • Chapter 5–107


Juniper Networks Design—Data Center

Chapter 5–108 • Ethernet Fabric Architectures www.juniper.net


Juniper Networks Design—Data Center

Chapter 6: IP Fabric Architecture


Juniper Networks Design—Data Center

We Will Discuss:
• The reasons for the shift to IP fabrics;
• The design considerations for an IP fabric;
• How to scale an IP fabric; and
• The design considerations of VXLAN.

Chapter 6–2 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

The Shift to IP Fabrics


The slide lists the topics we will discuss. We discuss the highlighted topic first.

www.juniper.net IP Fabric Architecture • Chapter 6–3


Juniper Networks Design—Data Center

IP-based Data Centers


Next generation data centers have different requirements than the traditional data center. One major requirement in a next
generation data center is that traffic is load balanced over the multiple paths between racks in a data center. Also, a
requirement that is becoming less and less necessary is the ability of the underlying switch fabric to carry native Ethernet
frames between VMs/servers in different racks. Some of the major reasons for this shift are:
1. IP-only Data: Many data centers simply need IP connectivity between racks of equipment. There is less and less
need for the stretching of Ethernet networks over the fabric. For example, one popular compute and storage
methodology is Apache’s Hadoop. Hadoop allows a large set of data (for example, a single terabit-sized file) to be
stored in chunks across many servers in a data center. Hadoop also allows the stored chunks of data to be
processed in parallel by the same servers on which they are stored. The connectivity between the possibly
hundreds of servers needs only to be IP-based.
2. Overlay Networking: Overlay networking allows for Layer 2 connectivity between racks; however, instead of Layer
2 frames being transferred natively over the fabric, they are tunneled using a different outer encapsulation.
Virtual Extensible LAN (VXLAN), Multiprotocol Label Switching (MPLS), and Generic Routing
Encapsulation (GRE) are some of the common tunneling protocols used to transport Layer 2 frames across the
fabric of a data center. We will discuss the details of VXLAN later in this chapter. One of the benefits of overlay
networking is that when there is a change to Layer 2 connectivity between VMs/servers (the overlay network),
the underlying fabric (the underlay network) can remain relatively untouched and unaware of the changes occurring
in the overlay network.

Chapter 6–4 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

Layer 2 Transport Network


The diagram above shows a typical scenario with a Layer 2 underlay network with attached servers that host VMs as well as
virtual switches. The example shows the underlay network as an Ethernet fabric. The fabric solves some of the customer
requirements including load balancing over equal cost paths (assuming Virtual Chassis Fabric) as well as having no blocked
spanning tree ports in the network. However, this topology does not solve the VM agility problem or the 802.1q VLAN overlap
problem. Also, as 802.1q VLANs are added to the virtual switches, those same VLANs must be provisioned on the underlay
network. Managing the addition, removal, and movement of VMs (and their VLANs) for thousands of customers would be a
nightmare for the operators of the underlay network.

www.juniper.net IP Fabric Architecture • Chapter 6–5


Juniper Networks Design—Data Center

Overlay Networking
Overlay networking can help solve many of the requirements and problems discussed in the previous slides. This slide shows
the addition of an overlay network that includes the use of VXLAN. The overlay network consists of the virtual switches and
the VXLAN tunnel endpoints (VTEPs). A VTEP will encapsulate the Ethernet frames that it receives from the virtual switch into
IP and forward the resulting IP packet to the remote VTEP. The underlay network simply needs to forward IP packets between
VTEPs. The receiving VTEP will de-encapsulate the VXLAN IP packets and then forward the resulting Ethernet frame to the
appropriate VM. Adding and removing VMs from the data center has no effect on the underlay network. The underlay
network simply needs to provide IP connectivity between the VTEPs.
When designing the underlay network in this scenario, you have a few choices. You can use an Ethernet fabric like Virtual
Chassis (VC), Virtual Chassis Fabric (VCF), or QFabric. All of these are valid solutions. Because all of the traffic crossing the
underlay network is IP, the option for an IP fabric becomes available. The choice of underlay network comes down to scale
and future growth. An IP fabric is considered to be the most scalable underlay solution for a few reasons as discussed later
in the chapter.

Chapter 6–6 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

IP Fabric Routing Design


The slide highlights the topic we discuss next.

www.juniper.net IP Fabric Architecture • Chapter 6–7


Juniper Networks Design—Data Center

A Three Stage Clos Network


In the 1950s, Charles Clos first wrote about his idea of a non-blocking, multistage, telephone switching architecture that
would allow calls to be completed. The switches in his topology are called crossbar switches. A Clos network is based on a
three-stage architecture, an ingress stage, a middle stage, and an egress stage. The theory is that there are multiple paths
for a call to be switched through the network such that calls will always be connected and not "blocked" by another call. The
term Clos “fabric” came about later as people began to notice that the pattern of links looked like threads in a woven piece
of cloth.
You should notice that the goal of the design is to provide connectivity from one ingress crossbar switch to an egress
crossbar switch. Notice that there is no need for connectivity between crossbar switches that belong to the same stage.

Chapter 6–8 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

An IP Fabric is Based on a Clos Fabric

The diagram shows an IP Clos fabric using Juniper Networks switches. In an IP Fabric, the Ingress and Egress stage crossbar
switches are called Leaf nodes. The middle stage crossbar switches are called Spine nodes. Most diagrams of an IP Fabric
do not present the topology with three distinct stages as shown on this slide. Most diagrams show an IP Fabric with the Ingress
and Egress stages combined as a single stage. It would be like taking the top of the diagram and folding it over onto itself, with
all Spine nodes on top and all Leaf nodes on the bottom of the diagram (see the next slide).

www.juniper.net IP Fabric Architecture • Chapter 6–9


Juniper Networks Design—Data Center

Spine and Leaf Architecture: Part 1


To maximize the throughput of the fabric, each Leaf node should have a connection to each Spine node. This ensures each
server-facing interface is always two hops away from any other server-facing interface. This creates a highly resilient fabric
with multiple paths to all other devices. An important fact to keep in mind is that a member switch has no idea of its location
(Spine or Leaf) in an IP Fabric. The Spine or Leaf function is simply a matter of a device’s physical location in the fabric. In
general, the choice of router to be used as a Spine node should be partially based on the interface speeds and number of
ports that it supports. The slide shows an example in which every Spine node is a QFX5100-24Q. The
QFX5100-24Q supports (32) 40GbE interfaces and was designed by Juniper specifically to serve as a Spine node.

Chapter 6–10 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

Spine and Leaf Architecture: Part 2


The slide shows that there are four distinct paths (one path per Spine node) between Host A and Host B across the fabric. In an
IP Fabric, the main goal of your design should be that traffic is automatically load balanced over those equal cost paths
using a hash algorithm (keeping frames from the same flow on the same path).

www.juniper.net IP Fabric Architecture • Chapter 6–11


Juniper Networks Design—Data Center

IP Fabric Design Options


IP Fabrics are generally structured in either a 3-stage topology or a 5-stage topology. A 3-stage topology is used in small to
medium deployments. We cover the design of a 3-stage fabric in the upcoming slides. A 5-stage topology is used in
medium to large deployments. Although we do not cover the design of a 5-stage fabric, you should know that the design of a
5-stage fabric is quite complicated and that there are many BGP design options.

Chapter 6–12 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

Layer 3 Connectivity
Remember that your IP Fabric will be forwarding IP data only. Each node will be an IP router. In order to forward IP packets
between routers, they need to exchange IP routes, so you have to make a choice between routing protocols. You want to
ensure that your choice of routing protocol is scalable and future proof. As you can see from the chart, BGP is the natural
choice for a routing protocol.

www.juniper.net IP Fabric Architecture • Chapter 6–13


Juniper Networks Design—Data Center

IBGP: Part 1
IBGP is a valid choice as the routing protocol for your design. IBGP peers almost always peer to loopback addresses as
opposed to physical interface addresses. In order to establish a BGP session (over a TCP session), a router must have a route
to the loopback address of its neighbor. To learn the route to a neighbor, an Interior Gateway Protocol (IGP) like OSPF must be
enabled in the network. One purpose of enabling an IGP is simply to ensure that every router knows how to reach the loopback
addresses of all other routers. Another problem that OSPF solves is determining all of the equal cost paths to remote
destinations. For example, router A will determine from OSPF that there are two equal cost paths to reach router B. Now
router A can load balance traffic destined for router B’s loopback address (IBGP-learned routes; see the next few slides) across
the two links towards router B.

Chapter 6–14 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

IBGP: Part 2
There is a requirement in an IBGP network that if one IBGP router needs to advertise an IBGP route, then every other IBGP
router must receive a copy of that route (to prevent black holes). One way to ensure this happens is to have every IBGP router
peer with every other IBGP router (a full mesh). This works fine but it does not scale (that is, add a new router to your IP fabric
and you will have to configure every router in your IP fabric with a new peer). There are two ways to help scale past the full mesh
issue: route reflection or confederations. Most often, route reflection is chosen because it is easy to implement. It is
possible to have redundant route reflectors as well (shown on the slide). It is a best practice to configure the Spine nodes as
route reflectors.
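A minimal Spine-node sketch of this approach follows, assuming hypothetical loopback addresses (10.0.0.1 for the Spine acting as route reflector and 10.0.0.11 and 10.0.0.12 for two Leaf clients), hypothetical interface names, and AS 65000; OSPF advertises the loopbacks that the IBGP sessions peer between:

set protocols ospf area 0.0.0.0 interface lo0.0 passive
set protocols ospf area 0.0.0.0 interface et-0/0/0.0
set protocols ospf area 0.0.0.0 interface et-0/0/1.0
set routing-options autonomous-system 65000
set protocols bgp group FABRIC-RR type internal
set protocols bgp group FABRIC-RR local-address 10.0.0.1
set protocols bgp group FABRIC-RR cluster 10.0.0.1
set protocols bgp group FABRIC-RR multipath
set protocols bgp group FABRIC-RR neighbor 10.0.0.11
set protocols bgp group FABRIC-RR neighbor 10.0.0.12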

www.juniper.net IP Fabric Architecture • Chapter 6–15


Juniper Networks Design—Data Center

IBGP: Part 3
You must design your IP Fabric such that all routers load balance traffic over equal cost paths (when they exist) towards
remote networks. Each router should be configured for BGP multipath so that it will load balance when multiple BGP
routes exist. The slide shows that routers A and B advertise the 10.1.1/24 network to RR-A. RR-A will use both routes for
forwarding (multipath) but will choose only one of those routes (the one from router B, because router B has the lowest router ID) to
send to router C (a Leaf node) and router D (a Spine node). Router C and router D will receive the route for 10.1.1/24. Both
copies will have a BGP next hop of router B’s loopback address. This is the default behavior of route advertisement and
selection in the IBGP with route reflection scenario.
Did you notice the load balancing problem (hint: the problem is not on router C)? Since router C has two equal cost paths to
get to router B (learned from OSPF), router C will load balance traffic to 10.1.1/24 over the two uplinks towards the Spine
routers. The load balancing problem lies on router D. Since router D received a single route that has a BGP next hop of router
B’s loopback, it forwards all traffic destined to 10.1.1/24 towards router B. The path through router A (which is an equal cost path
to 10.1.1/24) will never be used in this case. The next slide discusses the solution to this problem.

Chapter 6–16 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

IBGP: Part 4
The problem on RR-A is that it sees the routes received from routers A and B, 10.1.1/24, as a single route that has been
received twice. If an IBGP router receives different versions of the same route, it is supposed to make a choice between them
and then advertise the one chosen route to its appropriate neighbors. One solution to this problem is to make every Spine
node a route reflector. This would be fine in a small fabric but probably would not make sense when there are tens of Spine
nodes. Another option would be to make each of the advertisements from routers A and B look like unique routes. How can we
make the multiple advertisements of 10.1.1/24 from routers A and B appear to be unique routes? There is a draft RFC
(draft-ietf-idr-add-paths) that defines the ADD-PATH capability, which does just that: it makes the advertisements look unique. All
Spine routers in the IP Fabric should support this capability for it to work. Once enabled, routers advertise and evaluate
routes based on a tuple of the network and its path ID. In the example, routers A and B advertise the 10.1.1/24 route.
However, this time RR-A and router D support the ADD-PATH capability; RR-A attaches a unique path ID to each route and is
able to advertise both routes to router D. When the routes arrive on router D, router D installs both routes in its routing
table (allowing it to load balance towards routers A and B).
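On platforms and releases that support it, the ADD-PATH capability is enabled per address family on the route reflector and its clients. A hypothetical sketch, reusing the FABRIC-RR group name from the earlier example, might look like the following; the path count of 4 is an assumption sized to the number of Spine nodes:

set protocols bgp group FABRIC-RR family inet unicast add-path receive
set protocols bgp group FABRIC-RR family inet unicast add-path send path-count 4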

www.juniper.net IP Fabric Architecture • Chapter 6–17


Juniper Networks Design—Data Center

EBGP: Part 1
EBGP is also a valid design to use in your IP Fabric. You will notice that the load balancing problem is much easier to fix in the
EBGP scenario. For example, there will be no need for the routers to support any draft RFCs! The first requirement in the
design of your IP Fabric is that each router should be in its own unique AS. You can use AS numbers from the private or
public range or, if you will need thousands of AS numbers, you can use 32-bit AS numbers.

Chapter 6–18 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

EBGP: Part 2
In an EBGP design, there is no need for route reflectors or an IGP. The BGP peering sessions parallel the physical wiring. For
example, every Leaf node has a BGP peering session with every Spine node. There are no leaf-to-leaf or spine-to-spine BGP
sessions, just as there is no leaf-to-leaf or spine-to-spine physical connectivity. EBGP peering is done using the physical
interface IP addresses (not loopback interfaces). To enable proper load balancing, all routers need to be configured for
multipath multiple-as as well as a load balancing policy that is applied to the forwarding table, as shown.
policy-options {
    policy-statement PFE-LB {
        then {
            load-balance per-packet;
        }
    }
}
...
routing-options {
    forwarding-table {
        export PFE-LB;
    }
}
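To complement the policy above, a minimal Leaf-node BGP sketch follows. The AS numbers are drawn from the slides (a Leaf in AS64514 peering with Spines in AS64512 and AS64513), while the point-to-point addresses and the export policy name are hypothetical placeholders:

set routing-options autonomous-system 64514
set protocols bgp group SPINES type external
set protocols bgp group SPINES multipath multiple-as
set protocols bgp group SPINES export ADVERTISE-SERVER-NETWORKS
set protocols bgp group SPINES neighbor 172.16.1.0 peer-as 64512
set protocols bgp group SPINES neighbor 172.16.2.0 peer-as 64513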

www.juniper.net IP Fabric Architecture • Chapter 6–19


Juniper Networks Design—Data Center

EBGP: Part 3
The slide shows that the routers in AS64516 and AS64517 are advertising 10.1.1/24 to their two EBGP peers. Because
multipath multiple-as is configured on all routers, the receiving routers in AS64512 and AS64513 will install both
routes in their routing tables and load balance traffic destined to 10.1.1/24.

Chapter 6–20 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

EBGP: Part 4
The slide shows that the routers in AS64512 and AS64513 are advertising 10.1.1/24 to all of their EBGP peers (all Leaf
nodes). Because multipath multiple-as is configured on all routers, the receiving router shown on the slide (the router in
AS64514) will install both routes in its routing table and load balance traffic destined to 10.1.1/24.

www.juniper.net IP Fabric Architecture • Chapter 6–21


Juniper Networks Design—Data Center

Edge Design
The slide shows one example of an Edge design including multiple interconnected data centers. You should notice that both
IP Fabrics are using the same AS numbers. Normally, you would think that this would cause a routing problem. For example,
if the AS64513 router in DC1 receives a route that had passed through the AS64513 router in DC2, the route would be
dropped by the router in DC1. It would drop the route because it detects an AS path loop. To make it possible to
advertise BGP routes between data centers and ensure that routes are not dropped, you can configure the edge routers to
perform AS override when advertising routes to the remote DC. This feature causes the Edge routers to replace the AS
numbers in the AS path of the routes they advertise with their own AS. That way, when the routes arrive at the remote data
center, they appear to have come from one AS (although the AS number will appear several times in the AS path attribute).
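One way to express this in Junos is the as-override statement applied to the EBGP session toward the remote data center. The sketch below is a fragment added to an existing edge BGP group; the group name and neighbor address are hypothetical:

set protocols bgp group DC2-EDGE neighbor 192.0.2.1 as-override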

Chapter 6–22 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

IP Fabric Scaling
The slide highlights the topic we discuss next.

www.juniper.net IP Fabric Architecture • Chapter 6–23


Juniper Networks Design—Data Center

Scaling
To increase the overall throughput of an IP Fabric, you simply need to increase the number of Spine devices (and the
appropriate uplinks from the Leaf nodes to those Spine nodes). If you add one more Spine node to the fabric, you will also
have to add one more uplink to each Leaf node. Assuming that each uplink is 40GbE, each Leaf node can now forward an
extra 40Gbps over the fabric.
Adding and removing both server-facing ports (downlinks from the Leaf nodes) and Spine nodes will affect the
oversubscription (OS) ratio of a fabric. When designing the IP fabric, you must understand the OS requirements of your
customer. For example, does your customer need line rate forwarding over the fabric? Line rate forwarding would equate to
1-to-1 (1:1) OS. That means the aggregate server-facing bandwidth is equal to the aggregate uplink bandwidth. Or, maybe
your customer would be perfectly happy with a 3:1 OS of the fabric. That is, the aggregate server-facing bandwidth is 3 times
that of the aggregate uplink bandwidth. Most customers will probably not require or desire to design around a 1:1 OS.
Instead, they will need to decide, based on their normal bandwidth usage, what OS ratio makes the most sense. The next few
slides discuss how to calculate OS ratios of various IP fabric designs.

Chapter 6–24 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

3:1 Topology
The slide shows a basic 3:1 OS design. All Spine nodes, four in total, are QFX5100-24Q routers that each have (32) 40GbE
interfaces. All Leaf nodes, 32 in total, are QFX5100-48S routers that have (6) 40GbE uplink interfaces and (48) 10GbE
server-facing interfaces. Each of the (48) 10GbE ports on all 32 Leaf nodes will be fully utilized (that is, attached to
downstream servers). That means that the total server-facing bandwidth is 48 x 32 x 10 Gbps, which equals 15360 Gbps.
Each of the 32 Leaf nodes has (4) 40GbE Spine-facing interfaces in use. That means that the total uplink bandwidth is 4 x 32 x
40 Gbps, which equals 5120 Gbps. The OS ratio for this fabric is 15360:5120, or 3:1.
An interesting thing to note is that if you remove any number of Leaf nodes, the OS ratio does not change. For example, what
would happen to the OS ratio if there were only 31 Leaf nodes? The server-facing bandwidth would be 48 x 31 x 10 Gbps, which
equals 14880 Gbps. The total uplink bandwidth would be 4 x 31 x 40 Gbps, which equals 4960 Gbps. The OS ratio for this fabric is
14880:4960, or 3:1. This fact actually makes your design calculations very simple. Once you decide on an OS ratio and
determine the number of Spine nodes that will allow that ratio, you can simply add and remove Leaf nodes from the topology
without affecting the original OS ratio of the fabric.

www.juniper.net IP Fabric Architecture • Chapter 6–25


Juniper Networks Design—Data Center

2:1 Topology
The slide shows a basic 2:1 OS design in which two Spine nodes were added to the topology from the last slide. All Spine
nodes, six in total, are QFX5100-24Q routers that each have (32) 40GbE interfaces. All Leaf nodes, 32 in total, are
QFX5100-48S routers that have (6) 40GbE uplink interfaces and (48) 10GbE server-facing interfaces. Each of the (48) 10GbE
ports on all 32 Leaf nodes will be fully utilized (that is, attached to downstream servers). That means that the total
server-facing bandwidth is still 48 x 32 x 10 Gbps, which equals 15360 Gbps. Each of the 32 Leaf nodes has (6) 40GbE
Spine-facing interfaces. That means that the total uplink bandwidth is 6 x 32 x 40 Gbps, which equals 7680 Gbps. The OS
ratio for this fabric is 15360:7680, or 2:1.

Chapter 6–26 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

1:1 Topology
The slide shows a basic 1:1 OS design. All Spine nodes, six in total, are QFX5100-24Q routers that each have (32) 40GbE
interfaces. All Leaf nodes, 32 in total, are QFX5100-48S routers that have (6) 40GbE uplink interfaces and (48) 10GbE
server-facing interfaces. There are many ways that a 1:1 OS ratio can be attained. In this case, although the Leaf nodes
each have (48) 10GbE server-facing interfaces, we are only going to allow 24 servers to be attached at any given moment.
That means that the total server-facing bandwidth is 24 x 32 x 10 Gbps, which equals 7680 Gbps. Each of the 32 Leaf
nodes has (6) 40GbE Spine-facing interfaces. That means that the total uplink bandwidth is 6 x 32 x 40 Gbps, which also equals
7680 Gbps. The OS ratio for this fabric is 7680:7680, or 1:1.

www.juniper.net IP Fabric Architecture • Chapter 6–27


Juniper Networks Design—Data Center

Best Practices
When designing an IP fabric, you should follow some best practices. Remember, two of the main goals of an IP fabric design
(or a Clos design) are to provide a non-blocking architecture and to provide predictable load-balancing behavior.
Some of the best practices that should be followed include...
• All Spine nodes should be the exact same type of router. They should be the same model and they should also
have the same line cards installed. This helps the fabric to have a predictable load balancing behavior.
• All Leaf nodes should be the exact same type of router. Leaf nodes do not have to be the same router as the
Spine nodes. Each Leaf node should be the same model and they should also have the same line cards
installed. This helps the fabric to have a predictable load balancing behavior.
• Every Leaf node should have an uplink to every Spine node. This helps the fabric to have a predictable load
balancing behavior.
• All uplinks from a Leaf node to a Spine node should be the exact same speed. This helps the fabric to have
predictable load balancing behavior and also helps with the non-blocking nature of the fabric. For example, let
us assume that a Leaf has one 40GbE uplink and one 10GbE uplink to the Spine. When using the combination
of OSPF (for loopback interface advertisement and BGP next hop resolution) and IBGP, the bandwidth of the
links is taken into consideration when calculating the shortest path to the BGP next hop. OSPF will most
likely always choose the 40GbE interface for forwarding towards remote BGP next hops. This essentially blocks
the 10GbE interface from ever being used. In the EBGP scenario, the bandwidth is not taken into
consideration, so traffic will be equally load balanced over the two different speed interfaces. Imagine trying to
equally load balance 60 Gbps of data over the two links; how will the 10GbE interface handle 30 Gbps of traffic?
The answer is...it won’t.

Chapter 6–28 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

VXLAN
The slide highlights the topic we discuss next.

www.juniper.net IP Fabric Architecture • Chapter 6–29


Juniper Networks Design—Data Center

VXLAN
VXLAN is defined in RFC 7348. It describes a scheme to tunnel (overlay) Layer 2 networks over a Layer 3 network. Each
overlay network is termed a VXLAN segment and is identified by a 24-bit segment ID called the VXLAN Network Identifier
(VNI). Usually, a tenant’s 802.1q VLAN is mapped to a single VNI. The 24-bit segment ID allows approximately 16 million VXLAN
segments to coexist within the same administrative domain.
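On a Junos device acting as a VTEP, the VLAN-to-VNI mapping is typically expressed under the VLAN definition. A minimal sketch follows; the VLAN name, VLAN ID, and VNI value are hypothetical:

set vlans tenant-a-v100 vlan-id 100
set vlans tenant-a-v100 vxlan vni 10100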

Chapter 6–30 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

VXLAN Benefits
The slide lists some of the benefits of using a VXLAN overlay scheme.

www.juniper.net IP Fabric Architecture • Chapter 6–31


Juniper Networks Design—Data Center

VXLAN Packet Format


The VXLAN packet consists of the following:
1. Original Ethernet Frame: The Ethernet frame being tunneled over the underlay network.
2. VXLAN Header (64 bits): Consists of an 8-bit flags field, the VNI, and two reserved fields. The I flag must be set
to 1 and the other 7 reserved flags must be set to 0.
3. Outer UDP Header: Usually contains the well-known destination UDP port 4789. Some VXLAN implementations
allow this destination port to be configured to some other value. The source port is derived from a hash of the inner
Ethernet frame’s header, which helps the underlay load balance different flows.
4. Outer IP Header: The source address is the IP address of the sending VXLAN Tunnel End Point (VTEP). The
destination address is the IP address of the receiving VTEP.
5. Outer MAC: As with any packet being sent over a Layer 3 network, the source and destination MAC addresses will
change at each hop in the network.
6. Frame Check Sequence (FCS): A new FCS for the outer Ethernet frame.

Chapter 6–32 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

The Inbound VTEP


The VTEP is the end point of the VXLAN tunnel. A VTEP usually exists on the same server that is hosting the tenant VMs and
virtual switches. It performs the mapping between a particular tenant’s 802.1q VLAN and the VNI. The building of these
mapping tables can be performed manually or it can be performed dynamically by an SDN controller (discussed later in this
content). A VTEP will take Ethernet frames from the VMs and encapsulate them into standard VXLAN format (shown in the
next few slides). The VXLAN packet is then sent to the remote VTEP over the underlay network. Another function of the VTEP
is to perform MAC learning. As Ethernet frames are received from the VMs, it learns the source MAC addresses. The next slide
will discuss how MAC addresses are learned from VXLAN packets received from the underlay network.

www.juniper.net IP Fabric Architecture • Chapter 6–33


Juniper Networks Design—Data Center

The Receiving VTEP


The receiving VTEP takes the Layer 3 packets received from the underlay network and strips the outer MAC, outer IP/UDP
header, and VXLAN header. However, to help build its MAC learning table, it maps the source MAC address of the original
(inner) Ethernet frame to the source IP address (the remote VTEP’s IP address) of the received VXLAN packet. The VTEP then
forwards the original Ethernet frame to the destination VM.

Chapter 6–34 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

VXLAN Gateway
Based on what we have learned, the VTEP allows VMs on remote servers (that also have VTEPs) to communicate with each
other. What happens when a VM wants to communicate with the Internet or with a bare metal server (BMS)? In the case of
the Internet, how will the VXLAN packets from the sending VTEP get decapsulated before they leave the data center and are
passed on to the Internet? You must have a VXLAN Gateway to perform this function. A VXLAN Gateway is a networking device
that has the ability to perform the VTEP function. Some of the scenarios that a VXLAN Gateway can help with include the
following (a Layer 3 gateway configuration sketch follows the list):
• VM to Internet connectivity;
• VM to BMS connectivity;
• VM on one VNI to VM on another VNI connectivity (using IRB interfaces); and
• DC to DC connectivity (data center interconnection).
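As referenced above, a Layer 3 VXLAN gateway sketch on a platform that supports the function might map a VNI to an IRB interface as follows; all names and addresses are hypothetical, and the exact hierarchy varies by platform:

set vlans tenant-a-v100 vlan-id 100
set vlans tenant-a-v100 vxlan vni 10100
set vlans tenant-a-v100 l3-interface irb.100
set interfaces irb unit 100 family inet address 10.1.100.1/24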

www.juniper.net IP Fabric Architecture • Chapter 6–35


Juniper Networks Design—Data Center

VXLAN MAC Learning


This slide discusses the MAC learning behavior of a VTEP. The next few slides will discuss the details of how remote MAC
addresses are learned by VTEPs.

Chapter 6–36 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

BUM Traffic
The slide discusses the handling of BUM traffic by VTEPs according to the VXLAN standard model. In this model, you should
note that the underlay network must support a multicast routing protocol, preferably some form of Protocol Independent
Multicast Sparse Mode (PIM-SM). Also, the VTEPs must support the Internet Group Management Protocol (IGMP) so that they can
inform the underlay network that they are members of the multicast group associated with a VNI.
For every VNI used in the data center, there must also be a multicast group assigned. Remember that there are 2^24 (~16M)
possible VNIs, so in the extreme case your customer could need up to 2^24 group addresses. Luckily, 239/8 is a reserved set of organizationally scoped
multicast group addresses (2^24 group addresses in total) that can be used freely within your customer’s data center.
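A hedged sketch of tying a VNI to its multicast group on a VTEP, plus minimal PIM on the underlay routers, might look like the following; the VLAN name, group address, and RP address are hypothetical:

On the VTEP:
set vlans tenant-a-v100 vxlan vni 10100
set vlans tenant-a-v100 vxlan multicast-group 239.1.1.1

On the underlay routers:
set protocols pim rp static address 10.255.255.1
set protocols pim interface all mode sparse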

www.juniper.net IP Fabric Architecture • Chapter 6–37


Juniper Networks Design—Data Center

Building the Multicast Tree


The slide shows an example of a PIM-SM enabled network where the (*,G) rendezvous point tree (RPT) is established from
VTEP A to R1 and finally to the rendezvous point (RP). This is the only part of the RPT shown, for simplicity, but keep in mind
that each VTEP that belongs to 239.1.1.1 will also build its branch of the RPT (including VTEP B).

Chapter 6–38 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

Multicast Forwarding
When VTEP B receives a broadcast packet from a local VM, VTEP B encapsulates the Ethernet frame into the appropriate
VXLAN/UDP/IP headers. However, it sets the destination IP address of the outer IP header to the VNI’s group address
(239.1.1.1 on the slide). Upon receiving the multicast packet, VTEP B’s DR (the PIM router closest to VTEP B) encapsulates
the multicast packet into unicast PIM register messages that are destined to the IP address of the RP. Upon receiving the
register messages, the RP de-encapsulates the register messages and forwards the resulting multicast packets down the
(*,G) tree. Upon receiving the multicast VXLAN packet, VTEP A does the following:
1. Strips the VXLAN/UDP/IP headers;
2. Forwards the broadcast packet towards the VMs using the virtual switch;
3. If VTEP B was unknown, VTEP A learns the IP address of VTEP B; and
4. Learns the remote MAC address of the sending VM and maps it to VTEP B’s IP address.

When designing an IP Fabric-based data center, you must ensure that the appropriate devices support PIM-SM, IGMP, and the
PIM DR and RP functions.

www.juniper.net IP Fabric Architecture • Chapter 6–39


Juniper Networks Design—Data Center

MAC Learning with EVPN


Juniper Networks recommends using EVPN for remote MAC learning in a VXLAN overlay network. In this manner, the underlay
network does not need to support multicast at all. Instead, EVPN MP-IBGP sessions are established between PEs (VTEPs).
Route reflectors can also be used to scale the network. When MACs are learned locally, they are advertised by the attached
PE to all remote PEs participating in the same EVPN. Another major benefit of EVPN is that it allows PEs (VTEPs) to perform
the proxy ARP function. That is, when a local server ARPs for a remote IP address, if the PE knows the IP-to-MAC binding, it
will respond with an ARP reply instead of broadcasting the request to the remote PEs.

Chapter 6–40 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

EVPN Route Types


The address family identifier (AFI) and subsequent address family identifier (SAFI) for EVPN’s MP-BGP advertisements are
25 and 70, respectively. The slide shows the EVPN route types and their purpose.

www.juniper.net IP Fabric Architecture • Chapter 6–41


Juniper Networks Design—Data Center

EVPN MP-IBGP Overlay: Part 1


Previously, we discussed that the IP fabric underlay network should be established to support load balancing to the
loopbacks of every router and IP destination in the data center. The underlay network could be established using IBGP or
EBGP (EBGP is highly recommended). For the overlay network, specifically for the advertisement of EVPN routes, we
recommend MP-IBGP with route reflectors to help the network scale. How can you have an EBGP underlay network and an
MP-IBGP overlay network at the same time? How can a router have two AS numbers at the same time? This is done using
the local-as statement in Junos. By configuring a local autonomous system on a per-neighbor or per-group basis, the router can
participate in two ASs at the same time. One added benefit of using EVPN for signaling is that the specification allows a
server to be actively attached to two different access routers (PEs) at the same time. From Server A’s perspective, the two
upstream interfaces are configured as a LAG. From R27’s and R28’s perspective, both connections are actively used for
forwarding, with EVPN’s loop prevention mechanisms preventing forwarding loops.
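A simplified Leaf-node sketch of this arrangement follows. The underlay EBGP group keeps the fabric AS, while the overlay IBGP group uses local-as to join a common overlay AS and carries the EVPN address family; all addresses, AS numbers, and the route distinguisher and route target values are hypothetical:

set protocols bgp group UNDERLAY type external
set protocols bgp group UNDERLAY multipath multiple-as
set protocols bgp group UNDERLAY neighbor 172.16.1.0 peer-as 64512
set protocols bgp group OVERLAY type internal
set protocols bgp group OVERLAY local-as 65100
set protocols bgp group OVERLAY local-address 10.0.0.21
set protocols bgp group OVERLAY family evpn signaling
set protocols bgp group OVERLAY neighbor 10.0.0.1
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list all
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 10.0.0.21:1
set switch-options vrf-target target:65100:1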

Chapter 6–42 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

EVPN MP-IBGP Overlay: Part 2


The slide shows the automatic advertisement of MAC addresses that takes place when R27 and R28 learn Server A MAC
address in the data plane.

www.juniper.net IP Fabric Architecture • Chapter 6–43


Juniper Networks Design—Data Center

EVPN MP-IBGP Overlay: Part 3


The slide shows how a correct implementation of the underlay network allows overlay traffic between Servers B and A to
be load balanced over the IP fabric.

Chapter 6–44 • IP Fabric Architecture www.juniper.net


Juniper Networks Design—Data Center

VXLAN Gateway Placement


The slide shows that the standard place to implement a VXLAN Layer 2 gateway is on the Leaf nodes. Layer 3 gateway placement
is usually in the Spine or Fabric tier but can also be found on the Leaf nodes. Currently, most Juniper Leaf nodes (QFX5100,
EX4300, and so on) do not support the Layer 3 gateway functionality.

www.juniper.net IP Fabric Architecture • Chapter 6–45


Juniper Networks Design—Data Center

Support for VXLAN


The slide shows the various Juniper Networks data center products and their support for VXLAN and OVSDB. Here is a
summary of each of the features:
• Layer 2 VXLAN Gateway: Supports the Layer 2 VXLAN gateway function.
• Layer 3 VXLAN Gateway: Supports the Layer 3 VXLAN gateway function (i.e., a VXLAN gateway that can route
between the subnets on different VNIs). Requires the use of IRB interfaces.
• EVPN Signaling: Support for control plane learning using EVPN.


We Discussed:
• The reasons for the shift to IP fabrics;
• The design considerations for an IP fabric;
• How to scale an IP fabric; and
• The design considerations of VXLAN.


Review Questions
1.

2.

3.


Lab: IP Fabric Architecture


The slide provides the objective for this lab.


Chapter 7: Data Center Interconnect



We Will Discuss:
• The definition of the term Data Center Interconnect (DCI);
• The differences between the different Layer 2 and Layer 3 DCIs; and
• The benefits and use cases for EVPN.


DCI Overview

The slide lists the topics we will discuss. We discuss the highlighted topic first.


DCI Types
A DCI provides connectivity between remote data center sites. A Layer 3 DCI uses IP routing between data centers. It is
assumed that each DC uses a unique set of IP addresses. A Layer 2 DCI stretches the Layer 2 network (VLANs) from one data
center to another.


Transport
This slide lists some of the commonly used Layer 2 and Layer 3 DCI technologies. Several technologies on the list use a
Multiprotocol Label Switching (MPLS) network to provide the connectivity between data centers, including VPLS, EVPN, and
Layer 3 VPNs.


MPLS Advantages
Many of the DCI technologies listed on the previous slide depend on an MPLS network to transport frames between data
centers. Although in most cases an MPLS network can be substituted with an IP network (i.e., by encapsulating MPLS in
GRE), there are several advantages to using an MPLS network:
1. Fast failover between MPLS nodes: Fast reroute and Node/Link protection are two features of an MPLS network
that allow for 50ms or better recovery time in the event of a link failure or node failure along the path of an
MPLS label switched path (LSP).
2. Scalable VPNs: VPLS, EVPN, and Layer 3 MPLS VPNs are DCI technologies that use MPLS to transport frames between
data centers. These same technologies allow for the interconnection of many sites (potentially hundreds)
without the need to manually set up a full mesh of tunnels between those sites. In most cases, adding a
new site only requires the administrator to configure the devices at the new site. The remote sites do not need to be
touched.
3. Traffic engineering: MPLS allows the administrator to decide the path that traffic takes over the MPLS network. Traffic
no longer has to follow the path calculated by the IGP (i.e., all data taking the same path between sites). You
can literally direct different traffic types to take different paths over the MPLS network.
4. Any-to-any connectivity: When using an MPLS backbone to provide the DCI, you have the flexibility to
provide any type of MPLS-based Layer 2 DCI, Layer 3 DCI, or a combination of both. An MPLS
backbone is a network that can generally support most types of MPLS or IP-based connectivity at the same
time.


MPLS Packet Header


MPLS is responsible for directing a flow of IP packets along a predetermined path across a network. This path is the LSP,
which is similar to an ATM VC in that it is unidirectional. That is, the traffic flows in one direction from the ingress router to an
egress router. Duplex traffic requires two LSPs—that is, one path to carry traffic in each direction. An LSP is created by the
concatenation of one or more label-switched hops that direct packets between label-switching routers (LSRs) to transit the
MPLS domain.
When an IP packet enters a label-switched path, the ingress router examines the packet and assigns it a label based on its
destination, placing a 32-bit (4-byte) label in front of the packet’s header immediately after the Layer 2 encapsulation. The
label transforms the packet from one that is forwarded based on IP addressing to one that is forwarded based on the
fixed-length label. The slide shows an example of a labeled IP packet. Note that MPLS can be used to label non-IP traffic,
such as in the case of a Layer 2 VPN.
MPLS labels can be assigned per interface or per router. The Junos operating system currently assigns MPLS label values on
a per-router basis. Thus, a label value of 10234 can only be assigned once by a given Juniper Networks router.
At egress, the IP packet is restored when the MPLS label is removed as part of a pop operation. The now unlabeled packet is
routed based on a longest-match IP address lookup. In most cases, the penultimate (or second-to-last) router pops the label
stack, an operation known as penultimate hop popping. In some cases, a labeled packet is delivered to the ultimate router,
the egress LSR, where the stack is popped and the packet is forwarded using conventional IP routing.


The MPLS Header (Label) Structure


The 32-bit MPLS header consists of the following four fields:
• 20-bit label: Identifies the packet to a particular LSP. This value changes as the packet flows on the LSP from LSR
to LSR.
• Class of service (CoS) (experimental): Indicates queuing priority through the network. This field was initially just the
CoS field, but lack of standard definitions and use led to the current designation of this field as experimental. In
other words, this field was always intended for CoS, but which type of CoS is still experimental. At each hop along
the way, the CoS value determines which packets receive preferential treatment within the tunnel.
• Bottom-of-stack bit: Indicates whether this MPLS packet has more than one label associated with it. The MPLS
implementation in the Junos OS supports unlimited label stack depths for transit LSR operations. At ingress up to
three labels can be pushed onto a packet. The bottom of the stack of MPLS labels is indicated by a 1 bit in this
field; a setting of 1 tells the LSR that after popping the label stack an unlabeled packet will remain.
• Time-to-live (TTL): Contains a limit on the number of router hops this MPLS packet can travel through the network.
It is decremented at each hop, and if the TTL value drops below 1, the packet is discarded. The default behavior is
to copy the value of the IP packet into this field at the ingress router.


Things to Remember
The following are some of the key things to remember about working with MPLS labels:
• MPLS labels can be either assigned manually or set up by a signaling protocol running in each LSR along the path
of the LSP. Once the LSP is set up, the ingress router and all subsequent routers in the LSP do not examine the IP
routing information in the labeled packet—they use the label to look up information in their label forwarding tables.
Changing Labels by Segment
• Much as with ATM VCIs, MPLS label values change at each segment of the LSP. A single router can be part of
multiple LSPs. It can be the ingress or egress router for one or more LSPs, and it also can be a transit router of one
or more LSPs. The functions that each router supports depend on the network design.
• The LSRs replace the old label with a new label in a swap operation and then forward the packet to the next router
in the path. When the packet reaches the LSP’s egress point, it is forwarded again based on longest-match IP
forwarding.
• There is nothing unique or special about most of the label values used in MPLS. We say that labels have local
significance, meaning that a label value of 10254, for example, identifies one LSP on one router, and the same
value can identify a different LSP on another router.


Label-Switching Routers
An LSR understands and forwards MPLS packets, which flow on, and are part of, an LSP. In addition, an LSR participates in
constructing LSPs for the portion of each LSP entering and leaving the LSR. For a particular destination, an LSR can be at
the start of an LSP, the end of an LSP, or in the middle of an LSP. An individual router can perform one, two, or all of these
roles as required for various LSPs. However, a single router cannot be both the entrance and the exit point for the same LSP.


Label-Switched Path
An LSP is a one-way (unidirectional) flow of traffic, carrying packets from beginning to end. Packets must enter the LSP at the
beginning (ingress) of the path, and can only exit the LSP at the end (egress). Packets cannot be injected into an LSP at an
intermediate hop.
Generally, an LSP remains within a single MPLS domain. That is, the entrance and exit of the LSP, and all routers in between,
are ultimately under the control of the same administrative authority. This ensures that MPLS LSP traffic engineering is not done
haphazardly or at cross purposes but is implemented in a coordinated fashion.


The Functions of the Ingress Router


Each router in an MPLS path performs a specific function and has a well-defined role based on whether the packet enters,
transits, or leaves the router.
At the beginning of the tunnel, the ingress router encapsulates an IP packet that will use this LSP to R6 by adding the 32-bit
MPLS shim header and the appropriate data link layer encapsulation before sending it to the first router in the path. Only one
ingress router in a path can exist, and it is always at the beginning of the path. All packets using this LSP enter the LSP at the
ingress router.
In some MPLS documents, this router is called the head-end router, or the label edge router (LER) for the LSP. In this course,
we call it simply the ingress router for this LSP.
An ingress router always performs a push function, whereby an MPLS label is added to the label stack. By definition, the
ingress router is upstream from all other routers on the LSP.
In our example, you can see the packet structure. You can identify that the label value is 1000050 and that the ingress
router's action is to push this shim header in between the Layer 2 frame header and the IP header.


The Functions of the Transit Router


An LSP might have one or more transit routers along the path from the ingress router to the egress router. A transit router
forwards a received MPLS packet to the next hop in the MPLS path. Zero or more transit routers can exist in a path. In a fully
meshed collection of routers forming an MPLS domain, where each ingress router is connected directly to an exit point by
definition, no LSP needs a transit router to reach the exit point (although transit routers might still be configured,
based on traffic engineering needs).
MPLS processing at each transit point is a simple swap of one MPLS label for another. In contrast to longest-match routing
lookups, the incoming label value itself can be used as an index into a direct lookup table for MPLS forwarding, but this is
strictly an MPLS protocol implementation decision.
The MPLS protocol enforces a maximum limit of 253 transit routers in a single path because of the 8-bit TTL field.
In our example, we know that the packet was sent to us with the label value of 1000050, as the previous slide indicated.
Because this is a transit router, we swap out the incoming label value with the outgoing label value for the next section of
the LSP. We now see that the label has a value of 1000515.


The Function of the Penultimate Router


The second-to-last router in the LSP often is referred to as the penultimate hop—a term that simply means second to the last.
In most cases the penultimate router performs a label pop instead of a label swap operation. This action results in the
egress router receiving an unlabeled packet that then is subjected to a normal longest-match lookup.
Penultimate-hop popping (PHP) facilitates label stacking and can improve performance on some platforms because it
eliminates the need for two lookup operations on the egress router. Juniper Networks routers perform equally well with, or
without, PHP. Label stacking makes use of multiple MPLS labels to construct tunnels within tunnels. In these cases, having
the penultimate node pop the label associated with the outer tunnel ensures that downstream nodes will be unaware of the
outer tunnel’s existence.
PHP behavior is controlled by the egress node by virtue of the label value that it assigned to the penultimate node during the
establishment of the LSP.
In our example, you can see that the MPLS header has been popped and the router is sending the packet on to the egress
router without the MPLS information.


The Functions of the Egress Router


The final type of router defined in MPLS is the egress router. Packets exit the LSP at the egress router and revert to normal,
IGP-based, next-hop routing outside the MPLS domain.
At the end of an LSP, the egress router routes the packet based on the native information and forwards the packet toward its
final destination using the normal IP forwarding table. Only one egress router can exist in a path. In many cases, the use of
PHP eliminates the need for MPLS processing at the egress node.
The egress router is sometimes called the tail-end router, or LER. We do not use these terms in this course. By definition, the
egress router is located downstream from every other router on the LSP.


Label Distribution Protocols


Label distribution protocols create and maintain the label-to-forwarding equivalence class (FEC) bindings along an LSP from
the MPLS ingress label-switching router (LSR) to the MPLS egress LSR. A label distribution protocol is a set of procedures by
which one LSR informs a peer LSR of the meaning of the labels used to forward traffic between them. MPLS uses this
information to create the forwarding tables in each LSR.
Label distribution protocols are often referred to as signaling protocols. However, label distribution is a more accurate
description of their function and is preferred in this course.
The label distribution protocols create and maintain an LSP dynamically with little or no user intervention. Once the label
distribution protocols are configured for the signaling of an LSP, the egress router of an LSP will send label (and other)
information in the upstream direction towards the ingress router based on the configured options.


RSVP
The Junos OS uses RSVP as the label distribution protocol for traffic engineered LSPs.
• RSVP was designed to be the resource reservation protocol of the Internet and “provide a general facility for
creating and maintaining distributed reservation state across a set of multicast or unicast delivery paths”
(RFC 2205). Reservations are an important part of traffic engineering, so it made sense to continue to use
RSVP for this purpose rather than reinventing the wheel.
• RSVP was explicitly designed to support extensibility mechanisms by allowing it to carry what are called opaque
objects. Opaque objects make no real sense to RSVP itself but are carried with the understanding that some
adjunct protocol (such as MPLS) might find the information in these objects useful. This encourages RSVP
extensions that create and maintain distributed state for information other than pure resource reservation. The
designers believed that extensions could be developed easily to add support for explicit routes and label
distribution.
• Extensions do not make the enhanced version of RSVP incompatible with existing RSVP implementations. An
RSVP implementation can differentiate between LSP signaling and standard RSVP reservations by examining
the contents of each message.
• With the proper extensions, RSVP provides a tool that consolidates the procedures for a number of critical
signaling tasks into a single message exchange:
– Extended RSVP can establish an LSP along an explicit path that would not have been chosen by the
interior gateway protocol (IGP);
– Extended RSVP can distribute label-binding information to LSRs in the LSP;
– Extended RSVP can reserve network resources in routers comprising the LSP (the traditional role of
RSVP); and
– Extended RSVP permits an LSP to be established to carry best-effort traffic without making a specific
resource reservation.
Thus, RSVP provides MPLS-signaled LSPs with a method of support for explicit routes (“go here, then here, finally here…”),
path numbering through label assignment, and route recording (where the LSP actually goes from ingress to egress, which is
very handy information to have).
RSVP also gives MPLS LSPs a keepalive mechanism to use for visibility (“this LSP is still here and available”) and redundancy
(“this LSP appears dead…is there a secondary path configured?”).

LDP
LDP associates a set of destinations (prefixes) with each data link layer LSP. This set of destinations is called the FEC. These
destinations all share a common data LSP path egress and a common unicast routing path. LDP supports topology-driven
MPLS networks in best-effort, hop-by-hop implementations. The LDP signaling protocol always establishes LSPs that follow
the contours of the IGP’s shortest path. Traffic engineering is not possible with LDP.
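To make the relationship between these pieces concrete, the following is a minimal sketch of how MPLS, RSVP, and LDP are typically enabled on a Junos PE; the interface, LSP name, and loopback address are hypothetical, and in practice you would choose RSVP, LDP, or both based on the design:

    set interfaces ge-0/0/0 unit 0 family mpls
    set protocols mpls interface ge-0/0/0.0
    set protocols rsvp interface ge-0/0/0.0
    set protocols mpls label-switched-path PE1-to-PE2 to 192.168.0.2
    set protocols ldp interface ge-0/0/0.0

The label-switched-path statement creates an RSVP-signaled LSP toward the remote PE's loopback, while enabling LDP on the core-facing interface builds hop-by-hop LSPs that follow the IGP's shortest path.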


Protects Interfaces
Link protection is the Junos OS nomenclature for the facility backup feature defined in RFC 4090. Link protection is just one
of several methods to protect traffic as it traverses the MPLS network. The link protection feature is interface based, rather
than LSP based. The slide shows how the R2 node is protecting its interface and link to R3 through a bypass LSP.


Node Protection
Node protection is the Junos OS nomenclature for the facility backup feature defined in RFC 4090. Node protection is
another one of several methods to protect traffic as it traverses the MPLS network. Node protection uses the same
messaging as link protection. The slide shows that R2 is protecting against the complete failure of R3 through a bypass LSP.
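A hedged configuration sketch of both features follows; the interface and LSP names are hypothetical. Link protection is enabled per RSVP interface on the point of local repair, while the LSP itself requests either link protection or node protection at the ingress:

    set protocols rsvp interface ge-0/0/0.0 link-protection
    set protocols mpls label-switched-path PE1-to-PE2 node-link-protection

Replace node-link-protection with link-protection on the LSP if only interface and link failures need to be protected against.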


Customer Edge Devices


CE devices are located in the DC and usually perform standard switching or routing functions. CE devices can interface to PE
routers using virtually any Layer 2 technology and routing protocol.


Provider Edge Devices


PE devices are located at the edge of the data center or at the edge of a Service Provider's network. They interface with the CE
routers on one side and with the IP/MPLS core routers on the other. PE devices maintain site-specific VPN route and forwarding
(VRF) tables. In a Layer 3 VPN scenario, the PE and CE routers function as routing peers (RIP, OSPF, BGP, and so on), with the PE
router terminating the routing exchange between customer sites and the IP/MPLS core. In a Layer 2 VPN scenario, the PE's
CE-facing interface is configured with VLAN tagging that matches the CE's PE-facing interface, and any frames received from
the CE device are forwarded over the MPLS backbone to the remote site.
Information is exchanged between PE routers using either MP-BGP or LDP. This information exchange allows the PE routers to
map data to and from the appropriate MPLS LSPs traversing the IP/MPLS core.
PE routers, acting as ingress and egress LSRs, use MPLS LSPs when forwarding customer VPN traffic between sites. LSP tunnels
in the provider's network separate VPN traffic in the same fashion as PVCs in a legacy ATM or Frame Relay network.


Provider Routers
Provider (P) routers are located in the IP/MPLS core. These routers do not carry VPN data center routes, nor do they participate
in the VPN control and signaling planes. This is a key aspect of the RFC 4364 scalability model; only PE devices are aware of
VPN routes, and no single PE router must hold all VPN state information.
P routers are involved in the VPN forwarding plane where they act as label-switching routers (LSRs) performing label
swapping (and popping) operations.


VPN Site
A VPN site is a collection of devices that can communicate with each other without the need to transit the IP/MPLS
backbone (i.e., a single data center). A site can range from a single location with one switch or router to a network consisting
of many geographically diverse devices.


Layer 2 DCI
The slide highlights the topic we discuss next.


Classes of Layer 2 DCIs: Part 1


There are three classes of DCIs:
1. No MAC learning by the Provider Edge (PE) device - This type of Layer 2 DCI does not require the PE devices
to learn MAC addresses. Instead, the DCI is viewed by the DCs as a point-to-point link. Frames arrive at the PE routers
on a VLAN-tagged or untagged interface from the local DC, are encapsulated in MPLS, and are forwarded to the
remote site. The receiving remote-site PE device de-encapsulates the MPLS packet and forwards the remaining
Ethernet frame out of the local interface toward the DC. This class of DCI does not support point-to-multipoint
connectivity. For example, the slide shows this inherent issue, where the host in DC 3
cannot communicate with the hosts in the other two DCs using VLAN 600. VLAN 600 is used strictly for
communication between DC 1 and DC 2. To provide any-to-any data center connectivity, you must
instantiate a full mesh of point-to-point DCIs, each with its own dedicated VLAN assignment.


Classes of Layer 2 DCIs: Part 2


This is a continuation of the previous slide. The second of three classes of DCIs:
2. Data plane MAC learning by the PE device - This type of DCI requires that the PE device learn the MAC
addresses (source addresses of received Ethernet frames) of both the local data center and the remote
data centers. The PE devices use MAC aging timers to keep their MAC tables current. This can be
problematic when there are thousands of VM moves and changes within and between data centers.


Classes of Layer 2 DCIs: Part 3


This is a continuation of the previous slide. The third of three classes of DCIs:
3. Control plane MAC learning by the PE device - EVPN is the one form of DCI that allows for MAC learning in the
control plane. MAC addresses of hosts in the local data center are learned by the PE device using standard data
plane learning. Once a locally learned MAC address is added to a PE's MAC table, the PE advertises that
learned MAC to all of the appropriate remote PEs (programming the remote PEs' MAC tables). The remote PEs then
populate their local MAC tables with the new entry. Because of this ability to advertise locally learned MAC
addresses to remote PEs, the network is more proactive with regard to VM moves and changes, and the
flooding of BUM traffic is minimized.


CCC
Circuit Cross Connect (CCC) is the “static routing” of DCIs. For each CCC connection, an administrator must map a DC-facing
interface (or subinterface) to both an outbound MPLS LSP (for sending data to the remote DC) and an inbound MPLS LSP (for
receiving data from the remote DC). This was the original method of creating Layer 2 VPNs using Juniper Networks devices. This
method works fine, but it does have one glaring issue. CCC does not use MPLS label stacking (two or more MPLS labels stacked
on top of the Ethernet frame) to multiplex different types of data over an LSP. Instead, only one MPLS label is stacked on top
of the original Ethernet frame. Therefore, the LSPs used for a particular CCC are dedicated for that purpose and cannot be
reused for any other purpose. For every CCC you must create two MPLS LSPs. If you wanted to create a CCC for two VLANs
between the same two sites, you would have to instantiate four MPLS LSPs between the same two PE devices.
Notice that the PE devices do not have to learn MAC addresses to forward data. They simply use the mapping of the CE-facing
interface to the inbound and outbound MPLS LSPs to forward data.
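A hedged Junos sketch of one side of a CCC connection follows; the connection name, interface, VLAN, and LSP names are hypothetical, and the reverse mapping must be configured on the remote PE as well:

    set interfaces ge-0/0/1 vlan-tagging
    set interfaces ge-0/0/1 encapsulation vlan-ccc
    set interfaces ge-0/0/1 unit 600 encapsulation vlan-ccc vlan-id 600
    set protocols connections remote-interface-switch DC1-DC2-V600 interface ge-0/0/1.600
    set protocols connections remote-interface-switch DC1-DC2-V600 transmit-lsp PE1-to-PE2
    set protocols connections remote-interface-switch DC1-DC2-V600 receive-lsp PE2-to-PE1

The transmit-lsp and receive-lsp statements are the manual mapping described above, which is why the LSPs cannot be shared with any other service.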


BGP Layer 2 VPNs


With BGP Layer 2 VPNs, the VPNs are created using bidirectional MPLS LSPs, similar to CCC. However, instead of manually
mapping the LSPs to an interface on the PE devices, the LSPs are automatically mapped to the Layer 2 VPN using MP-BGP
advertisements. Data is forwarded using a two-level label stack that permits the sharing of the LSPs by numerous Layer 2
connections. The outer label delivers the data across the core of the network from the ingress PE device to the egress PE
device. The egress PE device then uses the inner label to map the Layer 2 data to a particular VPN-specific table.
The use of label stacking improves scalability, as now multiple VPNs can share a single set of LSPs for the forwarding of
traffic. Also, by over-provisioning Layer 2 VPNs on the PE device, adds and changes are simplified, as only the new DC’s PE
router requires configuration. This automatic connection to LSP mapping greatly simplifies operations when compared to the
CCC approach.
Because the provider delivers Layer 2 VPNs to the customer, the routing/switching for the customer’s private network is
entirely in the hands of the customer. From the perspective of the attached CE device, there is no operational difference
between a CCC and a BGP Layer 2 VPN solution.
Notice that the PE devices do not have to learn MAC addresses to forward data. They simply use the mapping of the
CE-facing-interface, inner label, and the inbound and outbound MPLS LSPs (outer label) to forward data.
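For illustration, a minimal BGP Layer 2 VPN instance on a Junos PE might look like the sketch below; all names and values are hypothetical, and the CE-facing interface encapsulation statements are omitted for brevity:

    set protocols bgp group pe-pe family l2vpn signaling
    set routing-instances L2VPN-600 instance-type l2vpn
    set routing-instances L2VPN-600 interface ge-0/0/1.600
    set routing-instances L2VPN-600 route-distinguisher 192.168.0.1:600
    set routing-instances L2VPN-600 vrf-target target:65000:600
    set routing-instances L2VPN-600 protocols l2vpn encapsulation-type ethernet-vlan
    set routing-instances L2VPN-600 protocols l2vpn site DC1 site-identifier 1
    set routing-instances L2VPN-600 protocols l2vpn site DC1 interface ge-0/0/1.600

The route target and site identifiers, rather than manually paired LSPs, are what tie the local interface to the remote site.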


LDP Layer 2 Circuits


With LDP Layer 2 circuits, the circuits are created using bidirectional MPLS LSPs, like CCC. However, instead of manually
mapping the LSPs to an interface on the PE devices, using LDP the LSPs are mapped to a VPN-specific forwarding table
(similar to BGP Layer 2 VPNs). This table then maps the data to a Layer 2 circuit. The LDP Layer 2 circuit approach also
makes use of stacked MPLS labels for improved scalability.
Label stacking means that PE devices can use a single set of MPLS LSPs between them to support many VPNs. The LDP
Layer 2 circuit signaling approach does not support the auto-provisioning of Layer 2 connections, and it relies on LDP for
signaling.
The LDP Layer 2 circuit approach requires configuration on all PE routers involved in the VPN when moves, adds, and
changes occur.
Because the MPLS core delivers Layer 2 circuits to the DC, the routing/switching for the DC is entirely in the hands of the DC
administrators.
Notice that the PE devices do not have to learn MAC addresses to forward data. They simply use the mapping of the
CE-facing-interface, inner label, and the inbound and outbound MPLS LSPs (outer label) to forward data.
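A minimal LDP Layer 2 circuit sketch is shown below; the neighbor address, interface, and virtual circuit ID are hypothetical, and matching values must be configured on the remote PE:

    set interfaces ge-0/0/1 vlan-tagging
    set interfaces ge-0/0/1 encapsulation vlan-ccc
    set interfaces ge-0/0/1 unit 600 encapsulation vlan-ccc vlan-id 600
    set protocols ldp interface lo0.0
    set protocols l2circuit neighbor 192.168.0.2 interface ge-0/0/1.600 virtual-circuit-id 600

The virtual-circuit-id plays the role of the inner label mapping; LDP signals the actual label values between the two PE loopbacks.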


Data Plane MAC Learning on the PE Devices


The previous slides discussed using point-to-point VPNs to connect DCs. Using those technologies, a particular DC would
need to dedicate a unique VLAN to connect to each remote site. What if you wanted to use the same VLAN to connect to all
remote sites? What if the network acted like a switch such that a host in one DC could automatically learn the MAC address
of any remote DC host? Data plane MAC learning DCIs are one class of DCI that behave in this manner. PE devices of this DCI
class rely on learning the MAC addresses of the local and remote hosts by looking at the source MAC address of received
frames. MAC addresses are learned one by one on an individual PE, and those learned addresses are not synchronized
between all PEs that belong to the same DCI. This fact, along with the fact that each PE maintains its own MAC table (with
its own aging timers, and so on), means that the MAC tables of the PEs of a DCI might not match, causing nondeterministic
behavior. This behavior is especially magnified in scenarios where hosts are moving between DCs.


VPLS
To the DC, a VPLS DCI appears to be a single LAN segment. In fact, it appears to act similarly to a learning bridge. That is,
when the destination media access control (MAC) address is not known, an Ethernet frame is sent to all remote sites. If the
destination MAC address is known, it is sent directly to the site that owns it. The Junos OS supports two variations of VPLS,
BGP signaled VPLS and LDP signaled VPLS.
In VPLS, PE devices learn MAC addresses from the frames that they receive. They use the source and destination addresses to
dynamically create a forwarding table (vpn-name.vpls) for Ethernet frames. Based on this table, frames are forwarded out of
directly connected interfaces or over an MPLS LSP across the provider core. This behavior means that an administrator does not
have to manually map Layer 2 circuits to remote sites.
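A hedged sketch of a BGP-signaled VPLS instance on a Junos PE follows; the instance name, site values, and targets are hypothetical, interface encapsulation statements are omitted, and the PE-to-PE MP-BGP session must carry family l2vpn signaling:

    set routing-instances VPLS-600 instance-type vpls
    set routing-instances VPLS-600 interface ge-0/0/1.600
    set routing-instances VPLS-600 route-distinguisher 192.168.0.1:600
    set routing-instances VPLS-600 vrf-target target:65000:600
    set routing-instances VPLS-600 protocols vpls site-range 10
    set routing-instances VPLS-600 protocols vpls site DC1 site-identifier 1

Each PE that shares the same route target automatically builds the vpn-name.vpls MAC table described above and floods unknown unicast toward the other member PEs.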


Control Plane MAC Learning


The slide lists some of the highlights of control plane MAC learning.


EVPN
Similar to VPLS, to CE devices the MPLS network appears to function as a single broadcast domain per VLAN. The network
acts like a learning bridge. PE devices learn MAC addresses from Ethernet frames received from the locally attached
DC. Once a local MAC is learned, the MAC is advertised to remote PE devices using the EVPN MP-BGP NLRI. The remote PEs, in
turn, map the BGP-learned MAC addresses to outbound MPLS LSPs. The synchronizing of learned MAC addresses minimizes
the flooding of BUM traffic over the MPLS network.
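For illustration, a minimal MPLS-based EVPN instance on a Junos (MX-style) PE might look like the following sketch; the instance name, VLAN, and community values are hypothetical, and interface encapsulation details are omitted:

    set protocols bgp group pe-pe family evpn signaling
    set routing-instances EVPN-600 instance-type evpn
    set routing-instances EVPN-600 vlan-id 600
    set routing-instances EVPN-600 interface ge-0/0/1.600
    set routing-instances EVPN-600 route-distinguisher 192.168.0.1:600
    set routing-instances EVPN-600 vrf-target target:65000:600
    set routing-instances EVPN-600 protocols evpn

With family evpn signaling enabled on the PE-to-PE session, locally learned MACs in this instance are advertised as EVPN Type 2 routes to the remote PEs.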


Layer 2 DCI Redundancy


The slide shows some example Layer 2 DCI redundancy scenarios. You should note that any VPLS scenario supports only
Active/Passive forwarding over the redundant links. This limitation is necessary because of the non-synchronized nature of data
plane MAC learning. EVPN is the only DCI that supports Active/Active forwarding.


EVPN Use Cases


The slide highlights the topic we discuss next.


Why EVPN?
Service providers are interested in EVPN because it improves their service offerings by allowing them to replace VPLS with a
newer, more efficient technology. Data center builders are interested in using EVPN because of its proactive approach to
MAC learning.


Next Generation Ethernet Services


EVPN allows a Service Provider to offer a more efficient form of E-LAN and E-LINE services. It gives the Service Provider
Layer 3 VPN-like policy control. For example, because MAC addresses are advertised in MP-BGP routes, routing
policy can be used to block certain MAC addresses from being advertised to remote sites. In terms of high availability, EVPN
allows a CE device to have multiple active connections into the Service Provider's network. Also, because of EVPN's control
plane MAC learning, the flooding of BUM traffic is minimized.


Overlay Network in a Single DC


Overlay networks are beginning to emerge in the DC. An overlay technology, like VXLAN, allows Ethernet frames to be
tunneled inside IP packets. In the slide, it is the vSwitch acting as a VXLAN Tunnel Endpoint (VTEP) that encapsulates
Ethernet frames from the VMs into VXLAN/IP packets. Because the DC infrastructure must be able to forward the IP packets,
the DC no longer needs to be a Layer 2 network. Instead, the DC can be built around an IP network of routers. The VXLAN
specification requires a VTEP to perform the following:
1. Learn local and remote MAC addresses;
2. Learn the IP address of interested VTEPs (VTEPs that are servicing similar VLANs); and
3. Forward BUM traffic to the interested VTEPs.
In the base specification of VXLAN, these requirements are met (particularly 2 and 3) by enabling multicast in the DC
network using Protocol Independent Multicast - Sparse Mode (PIM-SM). Interested VTEPs join an administratively assigned group
that has been associated with a particular VXLAN network. This works fine, but like VPLS, MAC learning is reactive and
unsynchronized between VTEPs. EVPN signaling can be used to satisfy all three requirements of a VTEP.
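As a point of comparison, the multicast-based approach on a Junos hardware VTEP looks roughly like the sketch below; the VLAN, VNI, group, and RP address are hypothetical, and a software vSwitch VTEP would carry equivalent settings in its own configuration:

    set switch-options vtep-source-interface lo0.0
    set vlans v100 vlan-id 100
    set vlans v100 vxlan vni 5100
    set vlans v100 vxlan multicast-group 239.1.1.100
    set protocols pim rp static address 10.255.0.1
    set protocols pim interface all

BUM traffic for VNI 5100 is sent to group 239.1.1.100, which only the interested VTEPs join; replacing the multicast group with EVPN signaling removes the PIM requirement from the underlay.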


EVPN for DCI


The slide shows a scenario where a customer has an all-Layer 2 DC and another DC that uses a VXLAN overlay. The slide
shows how EVPN can be used as a DCI between those two DCs. Remember that Ethernet frames from the hosts on the left
of the slide have been encapsulated in VXLAN/IP packets. By the time the data reaches the hosts on the right side of the
slide, the VXLAN/IP packets must have been de-encapsulated back into the original Ethernet frames. The device that
performs this function is a VXLAN Layer 2 gateway. Generally, a device on the edge of the VXLAN overlay network
performs the VXLAN gateway function. Any one of the MX Series devices could be the VXLAN gateway in this scenario.


Over the Top DCI


The slide shows how EVPN signaling can support a DCI without the use of MPLS. In this case, the customer CEs act as VXLAN
Layer 2 gateways. Between the CEs, Ethernet frames are encapsulated in VXLAN/IP packets and forwarded over a plain IP
service or private backbone. EVPN is used to synchronize MAC addresses between the DCs.


EVPN DCI Option


This slide shows the four options for an EVPN DCI. The next few slides discuss each option in detail.


OTT of L3VPN
The slide shows an example of the signaling and data plane when using EVPN/VXLAN over a Layer 3 VPN. The two MX Series
devices represent the PE devices for the Layer 3 VPN. The layer 3 VPN can be over a private MPLS network or could be a
purchased Service Provider service. From the QFXs' perspective, they are separated by an IP network. The QFXs simply
forward VXLAN packets between each other based on the MAC addresses learned through EVPN signaling. The MX devices
have an MPLS Layer 3 VPN between each other (bidirectional MPLS LSPs, an IGP, L3 VPN MP-BGP routing, and so on). The MXs
advertise the local QFX's loopback address to the other MX.
When forwarding data from West to East, QFX1 takes a locally received Ethernet frame and encapsulates it in a VXLAN
packet destined to QFX2’s loopback address. MX1 performs a lookup for the received packet on the VRF table associated
with the VPN interface (the incoming interface) and encapsulates the VXLAN packet into two MPLS headers (outer for MPLS
LSP, inner for MX2 VRF mapping). Upon receiving the MPLS encapsulated packet, MX2 uses the inner MPLS header to
determine the VRF table so that it can route the remaining VXLAN packet to QFX2. QFX2 strips the VXLAN encapsulation and
forwards the original Ethernet frame to the destination host.


EVPN Stitching over MPLS Network


The slide shows an example of the signaling and data plane when using EVPN stitching between three EVPNs: an EVPN between
QFX1 and MX1, an EVPN between MX1 and MX2, and an EVPN between MX2 and QFX2. Each EVPN is signaled using EVPN
MP-BGP signaling, and the EVPNs are stitched together on the MX devices using logical tunnel interfaces.
When forwarding data from West to East, QFX1 takes a locally received Ethernet frame and encapsulates it in a VXLAN
packet destined to MX1’s loopback address. MX1 strips the VXLAN encapsulation and forwards the remaining Ethernet
frame out of a logical tunnel interface. MX1 receives the Ethernet frame over the associated (looped) logical tunnel
interface. MX1 takes the locally received Ethernet frame and encapsulates it in two MPLS headers (outer for the MPLS LSP, inner
for MX2 VRF mapping). Upon receiving the MPLS-encapsulated packet, MX2 uses the inner MPLS header to determine the
outgoing interface. MX2 forwards the remaining Ethernet frame out of a logical tunnel interface. MX2 receives the Ethernet
frame over the associated (looped) logical tunnel interface. MX2 takes the locally received Ethernet frame and encapsulates
it in a VXLAN packet destined to QFX2’s loopback address. QFX2 strips the VXLAN encapsulation and forwards the remaining
Ethernet frame to the destination host.


EVPN Stitching between private and public IP Networks


The slide shows an example of the signaling and data plane when using EVPN stitching between three EVPNs: an EVPN between
QFX1 and MX1, an EVPN between MX1 and MX2, and an EVPN between MX2 and QFX2. Each EVPN is signaled using EVPN
MP-BGP signaling, and the EVPNs are stitched together on the MX devices using logical tunnel interfaces.
When forwarding data from West to East, QFX1 takes a locally received Ethernet frame and encapsulates it in a VXLAN
packet destined to MX1’s loopback address. MX1 strips the VXLAN encapsulation and forwards the remaining Ethernet
frame out of a logical tunnel interface. MX1 receives the Ethernet frame over the associated (looped) logical tunnel
interface. MX1 takes the locally received Ethernet frame and encapsulates it in a VXLAN packet destined to MX2’s loopback
address. MX2 strips the VXLAN encapsulation and forwards the remaining Ethernet frame out of a logical tunnel interface.
MX2 receives the Ethernet frame over the associated (looped) logical tunnel interface. MX2 takes the locally received
Ethernet frame and encapsulates it in a VXLAN packet destined to QFX2’s loopback address. QFX2 strips the VXLAN
encapsulation and forwards the remaining Ethernet frame to the destination host.


EVPN over IP
The slide shows an example of the signaling and data plane when using EVPN over an IP network. EVPN MP-BGP is used to
synchronize MAC tables.
When forwarding data from West to East, QFX1 takes a locally received Ethernet frame and encapsulates it in a VXLAN
packet destined to QFX2's loopback address. QFX2 strips the VXLAN encapsulation and forwards the remaining Ethernet
frame to the destination host.


Layer 3 DCI
The slide highlights the topic we discuss next.


Layer 3 DCI
A Layer 3 DCI uses routing to interconnect DCs. Each data center must maintain a unique IP address space such that there is
no overlap. A Layer 3 DCI can be established using just about any IP-capable link, including a point-to-point IP link, a Layer 3
MPLS VPN, an IPsec VPN, or a GRE tunnel. Also, using standard routing, multiple redundant connections are possible between DCs.


Layer 3 VPNs
The Junos OS supports Layer 3 provider-provisioned VPNs based on RFC 4364. In this model, the provider edge (PE) routers
maintain VPN-specific routing tables called VPN route and forwarding (VRF) tables for each of their directly connected VPNs.
To populate these forwarding tables, the CE routers advertise routes to the PE routers using conventional routing protocols
like RIP, OSPF and EBGP.
The PE routers then advertise these routes to other PE routers with Multiprotocol Border Gateway Protocol (MP-BGP) using
extended communities to differentiate traffic from different VPN sites. Traffic forwarded from one VPN site to another is
tunneled across the network using MPLS.
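A hedged sketch of a Layer 3 VPN VRF on a Junos PE follows; the instance name, interface, communities, and the EBGP PE-CE peering values are hypothetical:

    set routing-instances DC1-L3VPN instance-type vrf
    set routing-instances DC1-L3VPN interface ge-0/0/2.0
    set routing-instances DC1-L3VPN route-distinguisher 192.168.0.1:10
    set routing-instances DC1-L3VPN vrf-target target:65000:10
    set routing-instances DC1-L3VPN protocols bgp group ce type external
    set routing-instances DC1-L3VPN protocols bgp group ce neighbor 10.1.1.2 peer-as 65010
    set protocols bgp group pe-pe family inet-vpn unicast

Routes learned from the CE in this VRF are exported with the configured route target and advertised to the remote PEs over the family inet-vpn session.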


Layer 3 DCI Redundancy


There are many options for Layer 3 redundancy. The slide shows a few of those options. The first diagram shows a LAG
between data centers using two or more physical links. The second option shows the use of a routing protocol's load-balancing
ability over equal-cost paths (OSPF, EBGP, and so on).


We Discussed:
• The definition of the term DCI;
• The differences between the different Layer 2 and Layer 3 DCIs; and
• The benefits and use cases for EVPN.


Review Questions:
1.

2.

3.


Lab: Data Center Interconnect


The slide provides the objectives for this lab.

Answers to Review Questions
1.
There are three classes of Layer 2 DCIs: no MAC learning, data plane MAC learning, and control plane MAC learning.
2.
EVPN allows for proactive and synchronized (between PEs) updates to the MAC learning tables. Also, EVPN allows a DC to have
multiple active connections to the DCI for both load balancing and redundancy purposes.
3.
Some of the use cases for EVPN include next-generation Ethernet services, EVPN-VXLAN for DC overlay, EVPN for DCI, and
over-the-top Layer 2 VPN over IP.



Acronym List
ACL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .access control list
AD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . aggregation device
ADC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Application Delivery Controller
ALE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .annualized loss expectancy
API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .application programming interface
ARO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . annual rate of occurrence
ARP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Address Resolution Protocol
ASIC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .application-specific integrated circuit
BFD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Bidirectional Forwarding Detection
BGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Border Gateway Protocol
BMaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Bare Metal as a Service
BMS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bare metal server
BoR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bottom-of-rack
BPDU. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bridge protocol data unit
BUM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . broadcast, unknown unicast, and multicast
C&C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Command and Control
CapEx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .capital expenditures
CCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circuit cross connect
CEE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Converged Enhanced Ethernet
CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .command-line interface
CNA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Converged Network Adapter
CoS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class of service
CSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Control and Status Protocol
CSV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . comma-separated variable
CWDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coarse wavelength division multiplexing
DCB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . data center bridging
DCBX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Data Center Bridging Capability Exchange
DCI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . data center interconnect
DDoS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . distributed denial of service
DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . development and operations
DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Domain Name System
DPI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Deep Packet Inspection
DSCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .DiffServ code point
ECMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . equal-cost multipath
ECR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Energy Consumption Rating
EER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . energy efficiency ratio
EF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .exposure factor
EOL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . end-of-life
EoR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . end-of-row
EOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . end-of-support
EPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . events per second
ES-IS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . End System–to–Intermediate System
ETS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . enhanced transmission selection
ETSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . European Telecommunications Standards Institute
EVPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ethernet VPN
FC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fibre Channel
FCAPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . fault, configuration, accounting, performance, security
FCF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fibre Channel forwarder
FCID. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Fibre Channel ID
FCoE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fibre Channel over Ethernet
FCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Frame Check Sequence
FEC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . forwarding equivalence class
FIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FCoE initiation protocol
FMPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . fault monitoring and performance monitoring
FOUO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . For Official Use Only

FSPF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . fabric shortest path first
GRE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .generic routing encapsulation
GRES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . graceful Routing Engine switchover
GSLB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Global Server Load Balancing
GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .graphical user interface
HA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .high availability
HA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .high-availability
HR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Human Resources
HTTPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hypertext Transfer Protocol over Secure Sockets Layer
HVAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . heating ventilation and cooling
IaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Infrastructure as a Service
IBGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . internal BGP
ICCP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Interchassis Control Protocol
ICL-PL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .interchassis link-protection link
ICMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Internet Control Message Protocol
ICT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Information and Communications Technologies
IDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . intrusion detection system
IEEE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Institute of Electrical and Electronics Engineers
IETF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Internet Engineering Task Force
IGMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Internet Group Management Protocol
IGP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . interior gateway protocol
INE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . independent network element
IPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . intrusion prevention system
IPsec . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IP Security
IRB. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integrated routing and bridging
iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Internet Small Computer System Interface
IS-IS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Intermediate System-to-Intermediate System
JNCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Juniper Networks Certification Program
JSA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Juniper Networks Secure Analytics
ksyncd. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . kernel synchronization process
L2CP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Layer 2 Control Protocol
l2cpd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Layer 2 Control Protocol process
L3VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Layer 3 VPN
LACP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Link Aggregation Control Protocol
LAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . link aggregation group
LAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . local area network
LCD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . lowest common denominator
LER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . label edge router
LLDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Link Layer Discovery Protocol
LLDP-MED. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Link Layer Discovery Protocol–Media Endpoint Discovery
LSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . link-state advertisement
LSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . label switched path
LSR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .label-switching router
LSYS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . logical systems
MAC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . media access control
MC-LAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichassis link aggregation group
MDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Main Distribution Area
MDT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multicast distribution trees
mgd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . management process
MMF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multimode fiber
MoR. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . middle-of-row
MP-BGP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Multiprotocol Border Gateway Protocol
MPLS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Multiprotocol Label Switching
MPO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multifiber push-on
MSTP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Multiple Spanning Tree Protocol
MTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . mechanical transfer pull-off
NAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . network-attached storage
NAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Address Translation
NBAD. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . network behavior anomaly detection
NETCONF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Configuration Protocol

NFV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Functions Virtualization
NGFW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . next-generation firewall
NH-Trunks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Next-Hop Trunks
NIC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .network interface card
NOC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .network operations center
NPIV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N_Port ID virtualization
NSB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .nonstop bridging
NSR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . nonstop active routing
NVO3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Network Virtualization Overlays
NWWN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . node worldwide name
OOB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . out-of-band
OpEx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .operational expenditures
OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . operating system
OVSDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Open vSwitch Database
PE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . provider edge
PFC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . priority-based flow control
PFE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Packet Forwarding Engine
PHP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .penultimate-hop popping
PIM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Protocol Independent Multicast
PIM-SM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Protocol Independent Multicast sparse mode
PWWN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . port worldwide name
QCN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .quantized congestion notification
QoE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . quality of experience
QSFP+ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . quad small form-factor pluggable plus
RD-Trunks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Remote Destination Trunks
RE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Routing Engine
REST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . representational state transfer
RIPng. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . RIP next generation
RP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rendezvous point
RPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . remote procedure call
rpd. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . routing protocol process
RPT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rendezvous point tree
RSTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Rapid Spanning Tree Protocol
RTG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Redundant Trunk Group
SaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Software as a Service
SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . storage area network
SBU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sensitive but Unclassified
SCSI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Small Computer System Interface
SD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . satellite device
SDK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . software development kit
SDN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . software-defined network
SEA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Syrian Electronic Army
SFC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . switch-fabric chassis
SFP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .small form-factor pluggable
SFP+ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .small form-factor pluggable plus
SLAX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stylesheet Language Alternative Syntax
SLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . single loss expectancy
SMF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . single mode fiber
SSHv2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Secure Shell version 2
SSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Secure Sockets Layer
STP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Spanning Tree Protocol
TCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . translational cross-connect
TCN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . topology change notification
TG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . trunk group
ToR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .top-of-rack
TTL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . time to live
UDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .User Datagram Protocol
UFOUO. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Unclassified—For Official Use Only
ULES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Unclassified – Law Enforcement Sensitive
URL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Uniform Resource Locator

VAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Value Added Services
VC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Virtual Chassis
VCCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Virtual Chassis Control Protocol
VCF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Virtual Chassis Fabric
VCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Virtual Chassis port
VLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .virtual LAN
VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .virtual machine
vme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual management Ethernet
vMX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual MX
VNI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual network identifier
VoIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . voice over IP
VPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Virtual Private Cloud
VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual private network
VR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual router
VRF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual routing and forwarding
VRRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Virtual Router Redundancy Protocol
vSRX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual SRX
VTEP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual tunnel endpoint
vty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual terminal
VXLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual eXtensible local area network
WDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . wavelength-division multiplexing
WWN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .World Wide Name
XaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Everything as a Service
XFP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-gigabit small form-factor pluggable
XMPP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Extensible Messaging and Presence Protocol
XSLT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Extensible Stylesheet Language Transformations
