EX QFX Switch Refresh RFP
Sally Stevens
14th March 2015
Thank you for the opportunity to submit this non-binding (other than pricing for now-available
products listed in our quotes), subject to contract, proprietary and confidential proposal for your
consideration.
Trademarks
Juniper Networks, the Juniper Networks logo, Junos, NetScreen and ScreenOS are registered
trademarks of Juniper Networks, Inc. in the United States and other countries. All other
trademarks, service marks, registered trademarks or registered service marks are the property of
their respective holders.
This response contains information relating to Juniper Networks’ development plans and plans for future
products, features or enhancements (“SOPD”). SOPD information is subject to change at any
time, without notice. Except as may be set forth in definitive agreements for the potential
transaction, Juniper Networks provides no assurances, and assumes no responsibility, that such
future products, features or enhancements will be introduced. Therefore, XYZ, Inc. should
ensure that purchasing decisions:
a) are not being made based upon reliance on timeframes or specifics outlined in the SOPD;
and
b) would not be affected if Juniper Networks delays or never introduces the future products,
features or enhancements.
RFP Contacts
Name: Sally Stevens
Title: Account Director
Telephone: 07918 947032
Email: salstevens@juniper.net
Contents
RFP Contacts
Executive Summary
Partnership
Technical Summary
Technical Specifications
  Platform Roadmap
  Physical Requirements
  Future enhancements
  Data Centre Interconnect conclusion
Transition Plan
  Legacy to Point-of-Proof migration
  VLAN Harmonisation
  Feature and Service Support
  Data Centre Operations’ Standards
  Virtualization, Consolidation, Expansion and Cost Innovation
  ILNP (Identifier-Locator Network Protocol) – under consideration by Juniper as an alternative standard to Cisco LISP
  Summary
  Sales Services
  Product Introduction
Data Centre Network Transformation Phases
  Project Management
  Environmental Requirements
  Additional Requirements
  Maintenance
  References
  Supporting Material
Commercial Offering
Legal
Appendix One - Appendices
Executive Summary
Juniper Networks is pleased to present to XYZ, Inc. our Data Centre solution.
We offer a proven solution such that XYZ, Inc. can deploy a cost-effective family of switches that
delivers the high availability, unified communications, integrated security and operational
excellence which you need today, whilst also providing a platform for supporting your
requirements in the future.
Working with our authorized partners, Juniper has a broad, deep and successful track record in
delivering Data Centre technology that is easy to deploy and manage, reliable and cost
effective, along with software and services to manage the network in a virtualized data
centre environment.
Juniper Networks has a strong footprint and track record within the Public Sector in the UK.
Several Central Government Departments utilise Juniper security solutions, as well as our
switching portfolio. We have PSN customers, for example in Regional Government, as well as a
large percentage of Higher Education facilities in the UK. DFTS, Janet and Dante all run on
secure Juniper Networks infrastructure.
The Juniper switch solution offers an innovative alternative to the cost and complexity of
managing a legacy network.
Our solution will help lower your total cost of ownership through a flatter design, a single
network operating system and a common management structure, and will reduce space, power
and cooling requirements.
Juniper solutions are designed to deliver scalable port density and performance, providing XYZ,
Inc. with an economical pay-as-you-grow approach to building your flexible and high performance
network.
The proposed solution for the DC Network Refresh provides the following key business benefits:
• Cost Performance - Juniper is a pure-play organisation with significant investment back
into R&D, delivering cutting edge products at a lower cost of ownership and
procurement cost.
Thank you for the opportunity to respond. Juniper has a dedicated team available to meet with
you at your earliest convenience to discuss our proposal and answer any questions which you
may have.
Partnership
Juniper Networks Partner Advantage Program
Juniper Networks’ go-to-market model places a deliberate dependency on our Authorised Partner
Community. Juniper Partners are backed by Juniper Partner Advantage, a partner program
designed to ensure that Partners are rewarded, trained and certified to sell, design and support
Juniper products and solutions.
For the purpose of this RFP, Juniper is partnering with ACME to deliver the UK pricing schedule.
ACME is also XYZ, Inc.’s Distribution Partner, so we trust that this is the best possible
arrangement.
Juniper also has a strong partnership with XYZ, Inc. XYZ, Inc. is a Juniper Elite Portfolio and
Services partner across the complete Juniper portfolio. XYZ, Inc. can deliver and support the
entire Juniper product suite, including High End Routing, Routing, Switching, Firewalls, Secure
Routers, and Remote Access solutions. XYZ, Inc. has many Juniper certified pre- and post-sales
engineers and also takes part in our Ingenious Champion Program, being one of the very
few Juniper partners globally to have multiple Ingenious Champions (subject matter experts in
Juniper technologies).
In addition to the delivery of Juniper solutions to a wide range of Government and Enterprise
customers (including Home Office, Centrica, G4S and Marks and Spencer), XYZ, Inc. Services
also run the XYZ, Inc. Core Network on a Juniper platform and utilise Juniper technologies
throughout its business. XYZ, Inc. is also one of Juniper’s selected EMEA marketing partners,
and is invited to participate in the Juniper Partner Advisory Council. In support of XYZ, Inc., we
have assigned a full time Partner Manager and Technical Account Manager.
Technical Summary
The Juniper Networks technology solution offers a reliable, repeatable, secure and scalable
network that reduces the current physical footprint and meets XYZ, Inc.’s requirements for today,
whilst scaling to meet XYZ, Inc.’s requirements in the future.
The proposed solution for the DC Network provides XYZ, Inc. with the following key technical
benefits:
• Scalability - Our solution is scalable from a Virtual Chassis Fabric of two switches
up to 32 switches per Virtual Chassis Fabric (VCF), which can be replicated into a
series of VCFs to meet the forthcoming implementation of virtual servers, also
allowing implementation of an elastic service to meet changing requirements
without the need for large upfront capital costs.
• Reliability - Our solution leverages much of the same field-proven Juniper Carrier
technology―including high performance ASICs, system architecture and Junos
software―that powers the world’s largest service provider networks. The result is a
robust, time-tested and highly reliable infrastructure solution for high performance
networking.
• Data Centre Savings – By utilizing switches that are 1RU in size, the Juniper
solution reduces space and power consumption but increases the scale and size of
the services that can be supported and developed. * See Appendix one for data
• Interoperability – Our solution is fully standards compliant and, due to our open
APIs, can work with legacy hardware and other vendors as required to support
migration from your current estate. We will fully support and plan your migration in
an incremental fashion. We also ensure complete configuration and testing prior to
deployment.
• Longevity of spares and support – The hardware proposed is new to the market
and, as a result, XYZ, Inc. can expect the longest anticipated lifetime, as well as
support for 5 years from the announcement of any end of sale date. Juniper also
has a culture of improvement through software evolution.
All of these elements contribute to a simple cost efficient solution that provides XYZ, Inc. with a
solid foundation for the development of faster applications running on high speed virtual
platforms, to enable you to provide an excellent service to your Customer.
Technical Specifications
Platform Roadmap
Table 13.1: Required platform roadmap detail.
Ref Requirement Weighting
R001 [6.2.1] (Mandatory): Please provide a platform roadmap to cover a minimum 5 year period from the middle of 2015 till the end of 2019. The platform(s) should be available as general release by Q2 calendar year 2015. New hardware introduced during this period can be included if it adheres to requirement R004. It is expected that the roadmap will include platforms capable of fulfilling scalability and availability not less than that achievable with the current Cisco Catalyst switches (combination of 6500-series and 3750G-series).
Juniper response: Juniper Networks has proposed three switches for the Server Access and
Core layers within our proposal: the EX4300-48T to provide 100Mbps and 1GbE RJ45
connectivity for existing servers; the QFX5100-48S to provide both 1GbE and 10GbE
fibre-based connectivity; and the EX9200 to provide core 10GbE, 40GbE and 100GbE
connectivity whilst also providing EVPN and MPLS support for site-to-site connectivity.
The roadmap for the items specified in our response is listed below. At this
time, Statements of Product Direction (SOPD) are only issued for a 12-month
period for QFX-based and EX-based products.
More specific SOPD information can be obtained from our Product Line
Management team, who would be happy to attend a meeting in relation to
this bid. (Please refer to our SOPD statement within the Juniper
Confidentiality Notice.)
Infrastructure
IPv4
IPv6
Metro Ethernet
• QinQ support
• MPLS
• RSVP auto-bandwidth
Routing Protocols
Switching
• Metafabric 1.1
System Management
• ZTP
Virtual Chassis
VLAN Infrastructure
• VC Local Bias
• G.8032v1 and v2
Layer 2 Features
• VXLAN L2 Gateway
MPLS
• SNMPv3 support
• Operation, Administration, and Maintenance (OAM)
• 802.3ah support
Switching
• Metafabric 2.0
Virtual Chassis
• 32 member VCF
• ISSU support on VCF
IPv6
• BGP
Management
Metro Ethernet
Multicast
Switching
• Campus 1.0
• Metafabric 1.0
• Metafabric 1.1
System Management
TBD
• Puppet Agent
Virtual Chassis
IPv6
Layer 2 Features
• L2PT
Metro Ethernet
• Private VLAN
• PVLAN Support within a switch and across switches
Port Security
Switching
• Campus 1.1
VLAN Infrastructure
R002 [6.2.2] (Mandatory): Please indicate your End of Sale / Support lifecycle policy for device hardware, particularly regarding the timeline and support availability between End of Sale announcement and the end of product life.
Juniper response: Juniper’s End of Sale (EoS) policy for hardware devices is communicated
through the support website and directly via e-mail to the e-mail address of the
support contract holder. When a product reaches its end of life (EOL), Juniper
is committed to communicating important milestones throughout the EOL
period, including the initial EOL notification, Last Order Date (LOD) for the
product, End of Support (EOS) milestone dates, as well as other key
information pertaining to Juniper hardware and software products. Any
product being discontinued will be announced as EOL a minimum of one
hundred and eighty (180) days prior to the discontinuation and end of sale date,
also referred to as the last order date. On the last order date, products are
removed from the price list and are no longer available for purchase.
The last day of support for both hardware and software is five years after the End
of Sale date. Up to this point, hardware and software will be supported under
the existing contract in place prior to the End of Sale announcement.
An example of the EOL-EOS timetable can be seen below. Please note that
this example is not specific to the technology proposed within this RFP.
As and when XYZ, Inc. requires a new feature in forthcoming releases, a beta
version of that code would be released to XYZ, Inc. to test in their lab with the
results passed back to our development team. The development team would
make any changes that XYZ, Inc. have noted and then issue either as
standard code or as a special release. Juniper would suggest that XYZ, Inc.
sign up to the beta release programme and Junos development programme
so they can work with Juniper on new features.
R005 [6.2.5] (Medium): Please include major functionality introduced during the course of the roadmap; this should be divided into hardware and software/operating system specific functionality. Examples would include enhancements to product uptime, throughput, availability and support for new services & protocols.
Juniper response: Please refer to question R001 for a full list of roadmap or Statement of
Product Direction (SOPD) information, which includes both hardware and
software specific functions.
R006 [6.2.6] (Medium): Please provide any published benchmarks for the platforms in your roadmap.
Juniper response: Please refer to question R001 for a full list of roadmap or Statement of
Product Direction (SOPD) information on the platforms proposed in our solution. No
internal or external published benchmark information is available in relation to
SOPD statements.
Physical Requirements
Appendix 3 details a high-level view of the current data centre networks. It also contains key
requirements which should be addressed in the supplier’s response.
Suppliers should be innovative in the introduction of new infrastructure in the data centres where
rack space and power is at a premium.
_____________________________________________________________________
Juniper: Read and Understood
Solution Overview
Juniper proposes a two-tier architecture utilising the new EX4300 and
QFX5100 to provide both copper and fibre connectivity for 1GbE and 10GbE
server connections at the access/aggregation layer. Utilising a spine and leaf
approach, the spine switches would connect to a core/WAN layer of EX9200s.
Each tier within the solution is then virtualised using Juniper’s unique Virtual
Chassis (VC) technology to allow the solution to be deployed in manageable
VC clusters, providing additional scale when required, retaining ease of
management and minimizing space and power requirements.
To complement the QFX and provide connectivity for existing servers, the
EX4300 provides 48 ports of 10/100/1000Mbps RJ45 copper connectivity
with 4 x 1/10GbE front-facing uplink ports and a further 4 x 40GbE ports on the
rear. Like the QFX5100-48S, the EX4300 has dual power supplies and
fan trays, which are all hot swappable.
It’s at this point that we can introduce Virtual Chassis into the solution.
Juniper Networks’ unique Virtual Chassis technology enables up to 20
interconnected switches to be managed and operated as a single, logical
device with a single IP address and single MAC address. Virtual Chassis
technology enables enterprises to separate physical topology from logical
groupings of endpoints and, as a result, provides efficient resource utilization.
The advantages of connecting multiple switches into a Virtual Chassis Fabric
include:
• better-managed bandwidth at a network layer,
• simplified configuration and maintenance because multiple devices
can be managed as a single device,
• increased fault tolerance and high availability (HA) because a Virtual
Chassis can remain active and network traffic can be redirected to
other member switches when a single member switch fails,
• and a flatter, simplified Layer 2 network topology that minimizes or
eliminates the need for loop prevention protocols such as Spanning
Tree Protocol (STP).
It also allows multiple links to be aggregated together into single logical links. Thus
the two 10GbE links from each top-of-rack switch would be aggregated into a
single link providing 20GbE of uplink connectivity to the concentrators.
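By way of illustration, the following is a minimal Junos configuration sketch for aggregating two 10GbE uplinks into a single logical link using LACP; the interface names, aggregated Ethernet index and VLAN numbers shown are hypothetical examples rather than part of the detailed design, which would be confirmed during low-level design.

    # Sketch only - interface names, ae index and VLANs are hypothetical examples
    set chassis aggregated-devices ethernet device-count 2
    set interfaces ae0 description "2 x 10GbE uplink bundle"
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae0 unit 0 family ethernet-switching vlan members [ 100 200 ]
    set interfaces xe-0/2/0 ether-options 802.3ad ae0
    set interfaces xe-0/2/1 ether-options 802.3ad ae0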
With the introduction of the QFX5100 series of switches the existing Juniper
Virtual Chassis technology is further scaled and enhanced to support a spine-
and-leaf topology that is ideal for high-performance, low-latency data centre
deployments.
In its first instance, this topology, called Virtual Chassis Fabric (VCF), enables
up to 20 QFX5100, EX4300 and QFX3500 switches to be deployed in a
spine-and-leaf configuration, with two to four QFX5100s in the spine and up
to 18 QFX5100, EX4300 and QFX3500 switches as leaf nodes. This
architecture provides any-rack-to-any-rack deterministic throughput and less
than 2 microseconds of latency, while significantly simplifying network
operations through a single point of management.
Some of the key VCF features include:
• Any-to-any uniform performance
• Single managed fabric
• Scales to 768 1/10GbE ports
• Integrated Routing Engines
• Lossless in-band control network
• Network ports on hub and spoke switches
• Plug-and-play deployment
• Single-tier architecture
• Support for a variety of interface speeds
• Predictable oversubscription and performance
One of the spine switches is configured as the master Routing Engine (RE) and the
other is configured as the hot-standby backup RE. This formation provides
the single IP gateway and MAC address for the whole of the VCF whilst
providing a converged control plane across the two spine switches. This
converged control plane removes any convergence issues if a single spine
switch were to fail. In line with every Juniper product, we maintain
control plane and data plane separation, allowing traffic to flow even if the
whole control plane were to fail.
It’s at this point that additional nodes, or leafs, can be added to the VCF. This
is implemented on the master RE by configuring the serial number and
assigning a member number to each node. Once this is completed, each leaf
node is dual connected via either 1/10GbE or 40GbE fibre or DAC cables to the
two spine switches. The spine switches use LLDP to communicate with the new leaf
nodes and confirm that the serial numbers on the leaf nodes match the serial
numbers in the master configuration. Once confirmed, the leaf switches join
the VCF and are fully operational from the master RE.
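To illustrate this step, a minimal preprovisioned Virtual Chassis Fabric configuration on the master RE could look like the sketch below; the member numbers and serial numbers are hypothetical placeholders and would be replaced with the actual device serial numbers during deployment.

    # Sketch only - serial numbers and member numbering are hypothetical
    # (fabric mode is typically enabled on the QFX5100 spines first, e.g. with: request virtual-chassis mode fabric reboot)
    set virtual-chassis preprovisioned
    set virtual-chassis member 0 role routing-engine serial-number TA0000000001
    set virtual-chassis member 1 role routing-engine serial-number TA0000000002
    set virtual-chassis member 2 role line-card serial-number TB0000000003
    set virtual-chassis member 3 role line-card serial-number TB0000000004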
Another consequence of joining the VCF is that the uplinks from leaf nodes to
the spine switches are automatically aggregated and renamed virtual chassis
ports, or VCPs. These VCPs are removed from the configuration so they
cannot be renamed or re-configured.
As more leaf nodes are added to the VCF and registered with the master RE
the control plane and forwarding planes on each device start to become
aware of the other switches in the VCF. The master RE creates the state
information and federates the state to other switches enabling distributed
forwarding. As mentioned, all the fabric links are active-active; there is no
Spanning Tree inside the fabric. Traffic is load balanced on all links to
achieve an internal 1.8 microsecond latency. It is worth noting that inside
the VCF the default is to do local switching, so only traffic that is destined for
the spine or beyond will traverse the spine; all other traffic inside the
VCF will be switched locally. As such, you can achieve 550 nanoseconds
inside a VCF, and you can also do 16-way server multi-homing from
your server into the VCF fabric.
Using the same process as outlined earlier, you can add up to 18 leaf nodes
to a single VCF. As our roadmap information states, this will be changed via
software to 32 leaf nodes in mid-2015. Once you hit this limit, you can then
create a second VCF in the same way. This is where our solution scales to
support the large number of ports in the existing data centres.
As the diagram above shows, the first VCF is replicated as many times as
required to support the number of ports within each Tech Hall.
To provide connectivity between the VCFs we can implement two things. The
first is to introduce a Core layer which will aggregate the connections from
each VCF, whilst providing ongoing connectivity to the other Data Centres
and the wider network environment. The second is to connect the VCFs directly
to each other, which allows another route for traffic to flow on an east-west basis
as opposed to the traditional north-south route.
The Core layer would comprise EX9200 chassis. These provide dual Routing
Engines, multiple power supplies and fan trays. Our initial solution would be
to place a 6RU (Rack Unit) EX9204 within each Tech Hall and then connect
these EX9200s together to form a Virtual Chassis. In implementing the
EX9200s at this layer we introduce a number of additional benefits.
With the introduction of the EX9200s we come full circle to our entire
solution per data centre (as shown in the diagram below), and Juniper
would look to replicate this across all of the data centres in the same way as
noted above.
The next section of this response covers some of the innovation of our
solution and hardware involved.
Models
Three EX9200 chassis options are available, providing full deployment
flexibility:
• EX9204 – 4-slot, 6 RU chassis that supports up to three line cards
• EX9208 – 8-slot, 8 RU chassis that supports up to six line cards
• EX9214 – 14-slot, 16 RU chassis that supports up to 12 line cards
Fully configured, a single EX9214 chassis can support up to 320 10GE ports
(240 at wire speed for all packet sizes), delivering one of the industry’s
highest line-rate 10GE port densities for this class of feature rich and
programmable switch.
The EX9200 switch fabric is capable of delivering 240 Gbps (full duplex) per
slot, enabling scalable wire-rate performance on all ports for any packet size.
The pass-through midplane design also supports a future capacity of up to 13.2
Tbps.
R008 [6.3.2] (Mandatory): Please provide your solution proposal to provide multi-tenancy (same customer, different domains; e.g. Live, Dev, Clone) separation within a data centre switch infrastructure and details of any assurance for any separation technologies used.
Juniper response: The Juniper solution can provide multi-tenancy in a number of ways, from
EVPN at the edge of the network, to VPLS, VXLAN and MPLS, to our SDN
approach of VPN overlays through Juniper Contrail within the data centre.
Our response first looks at separation at Layer 2 and then at the use of
Juniper Contrail.
VPLS
Juniper Networks offers virtual private LAN service (VPLS) over MPLS, a
standards-based technology that meets the challenges and requirements
associated with data centre interconnectivity. A single physical network can
be partitioned into several logical VPLS instances that are separate and
secure logical L2 networks. This means that all logical instances can be
overlaid on the same physical network, and the same physical network can
appear as different logical VPLS networks. Each VPLS instance appears as a
bridge domain that extends the L2 segments between the different data
centres and offers point-to-multipoint connectivity.
The VPLS instances are mapped to the VLANs, which contain virtualized
resources. VLAN continuity can be maintained across the WAN without
disruption. MPLS is a multiservice transport technology designed to carry
different traffic such as IP packets, ATM cells, Frame Relay or Ethernet
frames, and so on. It inherently allows separation of traffic coming from
various logical network instances by labelling and sending them across
specific, optimized network paths. It has built-in protection and resiliency
mechanisms that allow fast recovery from failures or preventive maintenance.
Using MPLS, different types or forms of traffic can be transported quickly,
securely, and reliably over the same physical infrastructure.
VPLS leverages MPLS as the transport mechanism in the WAN to carry traffic
between what would have previously been a discontinuous L2 network or
VLAN segments at different data centres or sites. High availability is
maintained using MPLS resiliency mechanisms such as MPLS fast reroute
and on-demand paths in the network. Prioritization between applications is
made possible by using quality of service (QoS), and network bandwidth is
managed using traffic engineering (TE), thereby guaranteeing application
performance. Being a standards-based technology, VPLS over MPLS is well
suited to support data centre infrastructure convergence. This is an excellent
choice for a multivendor network, which needs to connect data centres
without having to massively replace equipment. While VPLS allows network
partitioning and extension of L2 segments, MPLS provides a transport
mechanism to carry various types of traffic between data centres.
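As an indicative example of the partitioning described above, a BGP-signalled VPLS routing instance on the EX9200 could be configured along the lines of the sketch below; the instance name, VLAN, interface, route distinguisher and route target values are hypothetical and shown purely for illustration.

    # Sketch only - names, VLANs, interfaces and community values are hypothetical
    set protocols bgp group IBGP family l2vpn signaling
    set routing-instances VPLS-TENANT-A instance-type vpls
    set routing-instances VPLS-TENANT-A vlan-id 100
    set routing-instances VPLS-TENANT-A interface ge-0/0/10.100
    set routing-instances VPLS-TENANT-A route-distinguisher 65000:100
    set routing-instances VPLS-TENANT-A vrf-target target:65000:100
    set routing-instances VPLS-TENANT-A protocols vpls site DC1 site-identifier 1
    set routing-instances VPLS-TENANT-A protocols vpls no-tunnel-services
    # (the attachment interface would additionally be configured with VPLS encapsulation)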
The QFX5100, EX4300 and EX9200 all support VPLS.
The web link below provides an extensive implementation of VPLS within a
Data Centre environment.
EVPN
To provide Layer 2 stretch services between the data centres, EVPN
(Ethernet VPN) becomes a logical choice. Ethernet VPN
(EVPN) enables you to connect a group of dispersed customer sites, which in
this case would be SDC01, SDC02 and SHP01, using a Layer 2 virtual bridge.
As with other types of VPNs, an EVPN is composed of customer edge (CE)
devices (QFX5100s) connected to provider edge (PE) devices. The PE
devices can include an MPLS edge switch (MES), such as the EX9200s
proposed at the core layer, that acts at the edge of the MPLS
infrastructure. To provide the multi-tenancy aspect, you can deploy multiple
EVPNs within the network, with each EVPN assigned to a series of virtual
routers within virtual servers, which in turn connect to customers while
ensuring that the traffic sharing that network remains private.
The MPLS infrastructure allows you to take advantage of the MPLS
functionality including fast reroute, node and link protection, and standby
secondary paths whilst allowing for inter-op between different vendors, as
MPLS/EVPN is an open standard.
For EVPNs, learning between MESs takes place in the control plane rather
than in the data plane (as is the case with traditional network bridging). The
control plane provides greater control over the learning process, allowing you
to restrict which devices discover information about the network. You can
also apply policies on the MESs, allowing you to carefully control how
network information is distributed and processed. EVPNs utilize the BGP
control plane infrastructure, providing greater scale and the ability to isolate
groups of devices (hosts, servers, virtual machines, and so on) from each
other.
The MESs attach an MPLS label to each MAC address learned from the CE
devices. This label and MAC address combination is advertised to the other
MESs in the control plane. Control plane learning enables load balancing and
improves convergence times in the event of certain types of network failures.
The learning process between the MESs and the CE devices is completed
using the method best suited to each CE device (data plane learning, IEEE
802.1, LLDP, 802.1aq, and so on).
The policy attributes of an EVPN are similar to those of an IP VPN (for example, Layer
3 VPNs). Each EVPN routing instance requires that you configure a route
distinguisher and one or more route targets. In this case the route reflector
could be a virtual router sitting within the EX9200s that are, or would be,
facing out into the WAN. A CE device attaches to an EVPN routing instance
on an MES through an Ethernet interface that might be configured for one or
more VLANs; alternatively this could be a VPLS domain, as you can use the VPLS
elements inside the Data Centre and EVPN to provide Layer 2 between data centres.
The following features are available for EVPNs:
• Ethernet connectivity between data centres spanning metropolitan
area networks (MANs) and WANs
• One VLAN for each MAC VPN
• Automatic route distinguishers
• Active Standby multihoming
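To give an indication of how such an EVPN instance might be expressed in Junos on the EX9200, a minimal configuration sketch is shown below; the instance name, VLAN, interface, route distinguisher, route target and LSP details are hypothetical placeholders.

    # Sketch only - names, VLANs, addresses and community values are hypothetical
    set protocols bgp group IBGP family evpn signaling
    set protocols mpls label-switched-path to-DC2 to 192.0.2.2
    set routing-instances EVPN-TENANT-A instance-type evpn
    set routing-instances EVPN-TENANT-A vlan-id 100
    set routing-instances EVPN-TENANT-A interface ge-0/0/10.100
    set routing-instances EVPN-TENANT-A route-distinguisher 65000:200
    set routing-instances EVPN-TENANT-A vrf-target target:65000:200
    set routing-instances EVPN-TENANT-A protocols evpn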
Contrail
Another option, which leverages the open standards of MPLS and the newer
functionality of Juniper’s SDN approach, is the use of Contrail.
The Contrail vRouter is a forwarding plane (of a distributed router) that runs in
the hypervisor of a virtualized server. It extends the network from the physical
routers and switches in a data centre into a virtual overlay network hosted in
the virtualized servers. Contrail vRouter is conceptually similar to existing
commercial and open-source vSwitches such as the Open vSwitch (OVS),
but it also provides routing and higher-layer services (for example, vRouter
instead of vSwitch).
The Contrail SDN Controller provides the logically centralized control plane
and management plane of the system and orchestrates the vRouters.
Virtual Networks
Virtual Networks (VNs) are a key concept in the Contrail system. VNs are
logical constructs implemented on top of the physical network. They are used
to replace VLAN-based isolation and provide multi-tenancy in a virtualized
data centre. Each tenant or an application can have one or more virtual
networks. Each virtual network is isolated from all the other virtual networks
unless explicitly allowed by security policy.
Overlay Networking
The Contrail vRouters build a virtual overlay network on top of the physical underlay network using a mesh
of dynamic “tunnels” among themselves. In the case of Contrail these overlay
tunnels can be MPLS over GRE/UDP tunnels or VXLAN tunnels.
The underlay physical routers and switches do not contain any per-tenant
state. They do not contain any Media Access Control (MAC) addresses, IP
address, or policies for virtual machines. The forwarding tables of the
underlay physical routers and switches only contain the IP prefixes or MAC
addresses of the physical servers. Gateway routers or switches that connect
a virtual network to a physical network are an exception—they do need to
contain tenant MAC or IP addresses.
The vRouters, on the other hand, do contain per-tenant state. They contain a
separate forwarding table (a routing instance) per virtual network. That
forwarding table contains the IP prefixes (in the case of L3 overlays) or the
MAC addresses (in the case of Layer 2 overlays) of the virtual machines. No
single vRouter needs to contain all IP prefixes or all MAC addresses for all
virtual machines in the entire data centre. A given vRouter only needs to
contain those routing instances that are locally present on the server (that is,
which have at least one virtual machine present on the server.)
In the data plane, Contrail supports MPLS over GRE, a data plane
encapsulation that is widely supported by existing routers from all major
vendors. Contrail also supports other data plane encapsulation standards
such as MPLS over UDP (better multi-pathing and CPU utilization) and
VXLAN. Additional encapsulation standards such as NVGRE can be easily
added in future releases.
The control plane protocol between the control plane nodes of the Contrail
system or a physical gateway router (or switch) is BGP (and NETCONF for
management). This is the exact same control plane protocol that is used for
MPLS L3VPNs and MPLS EVPNs.
The protocol between the Contrail SDN Controller and the Contrail vRouters
is based on XMPP [ietf-xmpp-wg]. The schema of the messages exchanged
over XMPP is described in an IETF draft [draft-ietf-l3vpn-end-system] and this
protocol, while syntactically different, is semantically very similar to BGP.
The fact that the Contrail system uses control plane and data plane protocols
that are very similar to the protocols used for MPLS L3VPNs and EVPNs has
multiple advantages. These technologies are mature and known to scale, and
they are widely deployed in production networks and supported in
multivendor physical gear that allows for seamless interoperability without the
need for software gateways.
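As a simple illustration of this interoperability, a gateway EX9200 or MX router could peer with the Contrail SDN Controller using standard BGP configuration of the following form; the autonomous system number and IP addresses are hypothetical placeholders only.

    # Sketch only - AS number and addresses are hypothetical
    set routing-options autonomous-system 65000
    set protocols bgp group contrail-controllers type internal
    set protocols bgp group contrail-controllers local-address 192.0.2.1
    set protocols bgp group contrail-controllers family inet-vpn unicast
    set protocols bgp group contrail-controllers family evpn signaling
    set protocols bgp group contrail-controllers neighbor 198.51.100.10
    set protocols bgp group contrail-controllers neighbor 198.51.100.11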
As the diagram above shows, the Contrail system consists of two parts—a
logically centralized but physically distributed Contrail SDN Controller and a
set of Contrail vRouters that serve as software forwarding elements
implemented in the hypervisors of general-purpose virtualized servers.
Contrail SDN Controller provides northbound REST APIs used by
applications. These APIs are used for integration with the cloud orchestration
system—for example, for integration with OpenStack via a Neutron (formerly
known as Quantum) plugin. The REST APIs can also be used by other
applications and operator’s OSS/BSS. Finally, the REST APIs are used to
implement the web-based GUI included in the Contrail system.
The Contrail system provides three interfaces: a set of northbound REST APIs that are
used to talk to the orchestration system and the applications; southbound
interfaces that are used to communicate to virtual network elements (Contrail
vRouters) or physical network elements (gateway routers and switches); and
an east-west interface used to peer with other controllers. OpenStack and
CloudStack are the supported orchestrators, standard BGP is the east-west
interface, XMPP is the southbound interface for Contrail vRouters, and BGP
and NETCONF are the southbound interfaces for gateway routers and
switches.
So, from XYZ, INC. Data Centre Operations’ point of view Contrail would be
implemented in the following way.
R009 [6.3.3] (Mandatory): Please provide your solution proposal for DMZ provision within the proposed architecture. The DMZs may be in a shared or separate architecture. The DMZs may contain Firewalls, Load Balancers and other network related devices. The refresh of Firewalls is not within scope of this RFP.
Juniper response: Juniper would look to implement the same VCF architecture within the DMZ
environment as proposed for the wider data centre solution, but scaled to
support the size and layout of the DMZ. This would mean the switches can be
the same, installed and configured in the same way, in either a spine and leaf or
a daisy-chain architecture. The principles of a single point of management, a
distributed forwarding table and flexible scaling would stay the same.
Juniper would ask to be provided with more details on the DMZ environment
and would then architect our DMZ approach to suit.
R010 [6.3.4] (Mandatory): The Nexus switch infrastructure is not being replaced; however, please provide details of how your switch solution can be merged with the current infrastructure to provide additional capacity with no loss of current capabilities.
Juniper response: Data Centre Interconnect using VPLS
Virtual Private LAN Service (VPLS), which provides both intra- and inter-metro
Ethernet connectivity over a common IP/MPLS network, is our preferred
method of connecting different data centres. VPLS is a standards-based
implementation that guarantees interoperability between vendors; this is preferable
to proprietary implementations such as OTV, which are promoted by a single vendor
and introduce no significant benefits to justify a new technology. VPLS also
ensures low-risk deployment options for XYZ, Inc. data centres with
different vendor infrastructures.
VPLS/MPLS is an extension to VLANs. Some of the similarities between
VLANs and MPLS are:
• VLANs have 802.1Q tags similar to the labels in an MPLS LSP. VLANs
use priorities in the packet header (i.e. 802.1p) similar to the
priority fields in MPLS (DSCP/EXP QoS).
• VLANs enable Layer 2 segmentation whereas MPLS enables Layer 2
and Layer 3 segmentation.
Future enhancements
To meet the future demands of XYZ, Inc. regarding optimized DCI, localization
and controlled traffic flow, EVPN will be introduced, with the ability for an
MPLS edge switch (MES) that acts at the edge to advertise locally learned
MAC addresses in BGP to other MESs, using principles borrowed from IP
VPNs. EVPN requires an MES to learn the MAC addresses of CEs connected
to other MESs in the control plane using BGP.
XYZ, Inc. Data Centres that have not adopted EVPN will use VPLS as the DCI
protocol. This ensures connectivity with new and existing data centres
independent of vendor infrastructure.
R010-1 [6.3.5] (Mandatory): To be able to be integrated with the existing Cisco topology both physically and logically in such a way as to allow a phased migration from old to new infrastructure, without compromising the stability of the network. Consideration must be given to how the layer 2 VLAN topology, the layer 3 routing instances and the load-balancing functions will be dealt with during and after the migration.
Juniper response:
Transition Plan
This section details the recommended migration steps to migrate from XYZ,
Inc.’s existing DC to the new DC network.
We will study the following transitions:
• Legacy to Point-of-Proof
• Nexus-based to Point-of-Proof
• Multi-interconnections concerns
Please note: The Transition Plan proposed in this section has been built with
Juniper’s knowledge of XYZ, Inc. DC networks at the time of writing. The
Transition Plan should be re-evaluated when Juniper is able to assess the
XYZ, Inc. DC environment in more detail and then be validated in the lab,
with impact analysis.
Please note(2): This Transition Plan covers technology aspects only. A more
detailed Transition Plan will be built before the migration, as described in the
‘Transition Methodology’ Chapter. This detailed Plan will be built after a
complete assessment of the XYZ, Inc. environment, and will include, for
example:
• Identification of the transition steps (as provided in this document)
• Pre-requisites for each step
• Detailed methodology for each of those steps
• Integration with XYZ, Inc. tools and procedures
• Recommended migration validation steps
• Rollback procedures
• Risk management and migration dependencies, including applications, servers and services
• Roles and responsibilities
• Network and site maintenance window schedules
• Timetable
[Diagram: EX9K core connected to the Virtual Chassis Fabric]
InterConnect Solutions
There are two options to interconnect the two solutions. The first is STP-free,
using Multi-Chassis LAG (MC-LAG) Active/Standby technology; the second requires
STP to run between the two solutions.
• MC-LAG (STP-free):
An MC-LAG in Active/Standby mode is created on the new solution core
(MX). Each link of the MC-LAG connects one MX to a different Catalyst. This
link will be used for L2 traffic.
This first approach is the preferred one, to keep both environments as
independent as possible during the transition, and will be used as the reference
for the remainder of the transition (a configuration sketch is provided at the end of this subsection).
• RSTP:
Another option, which is less preferred, is to enable STP between the existing
and the new solutions. We have the option to use VSTP on the MX
(the equivalent of Cisco’s RPVST+), or to use the compatibility of PVST+ to operate
with regular STP. If this option is selected, the EX9K should be the
root bridge and backup root bridge of the STP, to give the following STP
topology:
[Diagram: resulting STP topology]
• Interconnection capacity:
Technologies like Q-in-Q can be used to make those Gateways transparent
to any VLAN need, so they won’t need any type of specific configuration
except the Q-in-Q to carry all VLANs.
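The configuration sketch below illustrates the preferred MC-LAG (Active/Standby) option on one of the two new core nodes; the interface names, identifiers and ICCP addresses are hypothetical, and the second node would carry the mirror-image configuration (chassis-id 1, status-control standby) together with an inter-chassis link.

    # Sketch only - interface names, IDs and addresses are hypothetical; one core node shown
    set interfaces ae1 description "MC-LAG towards existing Catalyst pair"
    set interfaces ae1 aggregated-ether-options lacp active
    set interfaces ae1 aggregated-ether-options lacp system-id 00:00:00:11:11:11
    set interfaces ae1 aggregated-ether-options lacp admin-key 1
    set interfaces ae1 aggregated-ether-options mc-ae mc-ae-id 1
    set interfaces ae1 aggregated-ether-options mc-ae redundancy-group 1
    set interfaces ae1 aggregated-ether-options mc-ae chassis-id 0
    set interfaces ae1 aggregated-ether-options mc-ae mode active-standby
    set interfaces ae1 aggregated-ether-options mc-ae status-control active
    set protocols iccp local-ip-addr 10.255.0.1
    set protocols iccp peer 10.255.0.2 redundancy-group-id-list 1
    set protocols iccp peer 10.255.0.2 liveness-detection minimum-interval 1000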
L3 Migration
This is how L3 routing and Server Gateways will be handled during the
migration. The starting point is:
• HSRP used on the example PoD
• Routing (OSPF) for the VLANs of the PoDs is
also hosted on Archipelagos Cat6k
From this point, and as explained so far, servers and
services are migrated to the new solution:
[Diagram: HSRP default gateway on the existing solution, with OSPF routes via the L3 core]
Please Note: The green “Routes” arrows represent how routing is performed
to reach the server VLAN.
At this stage, all servers from both existing and new solution are using the
example HSRP as their Default Gateway:
[Diagrams: servers on both the existing and new solutions using the existing HSRP default gateway, with OSPF routing unchanged]
Then we will move the routing from the existing solution to the new one (the
new solution routers will be the preferred path for traffic being sent to the PoD
VLAN), and the default gateway will be moved as well. Please note: when
VRRP transitions to the master state, it sends a gratuitous ARP. This allows
all servers to update their ARP tables with the new MAC address for their default
gateway (VRRP and HSRP use different MAC address ranges). To get the
benefit of this feature, HSRP must be brought down before this transition
happens.
[Diagram: OSPF routing and VRRP default gateway now hosted on the new solution]
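As an indicative example, the VRRP default gateway on the new solution could be configured as sketched below; the IRB unit, addresses and priority values are hypothetical and would be aligned to the actual PoD VLANs during detailed migration planning.

    # Sketch only - VLAN unit, addresses and priorities are hypothetical
    set interfaces irb unit 100 family inet address 10.10.100.2/24 vrrp-group 1 virtual-address 10.10.100.1
    set interfaces irb unit 100 family inet address 10.10.100.2/24 vrrp-group 1 priority 200
    set interfaces irb unit 100 family inet address 10.10.100.2/24 vrrp-group 1 accept-data
    set interfaces irb unit 100 family inet address 10.10.100.2/24 vrrp-group 1 preempt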
VLAN Harmonisation
The VLAN harmonisation can be handled by the EX9K.
_____________________________________________________________________
Juniper: Read and Understood
R015 [6.5.3] (Mandatory): Remote configuration is currently achieved in-band using SSH, and out-of-band using asynchronous connectivity to the Cisco console port. Please indicate how the same configuration methods can be achieved using your products/solutions.
Juniper response: All EX and QFX configuration and troubleshooting can be performed using in-band
or out-of-band (recommended) SSH connectivity, although in practice
administrators would be more likely to use Junos Space to make
configuration changes. All insecure protocols can be disabled. All devices
feature a standard console port for serial console access. Juniper devices
also provide a dedicated out-of-band management Ethernet port, which has
direct connectivity to the Routing Engine. This allows out-of-band management
which bypasses the packet forwarding engine and the in-band forwarding
traffic, giving completely isolated access to the switch. The EX4300 has a
10/100/1000Mbps RJ45 management port and the QFX5100 has both an RJ45 port and an
SFP port.
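By way of illustration, the sketch below shows how SSH-only management and an out-of-band management address could be configured on the proposed switches; the management address is a hypothetical example (on a Virtual Chassis or VCF the shared vme interface would typically be used, while a standalone EX4300 uses me0 and a standalone QFX5100 uses em0).

    # Sketch only - the management address is a hypothetical example
    set system services ssh protocol-version v2
    set system services ssh root-login deny
    set system services netconf ssh
    set interfaces vme unit 0 family inet address 192.0.2.10/24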
R016 [6.5.4] (Mandatory): Configuration backup/restore: a backup copy of all network device configurations is held on a CiscoWorks server so that they can be restored in the event of device failure. Please indicate how this function can be carried out on your products, and any additional software tools which may be required.
Juniper response: Junos Space is able to maintain regular configuration backups of all EX and
QFX devices.
In addition to the local configuration history (by default, Junos devices store
up to 50 previous configurations locally on the device), devices can be
configured to automatically upload configuration files to an FTP or SCP target
after every change or at specific (e.g. daily) intervals.
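For illustration, the on-box configuration archival described above can be expressed as in the sketch below; the SCP user, host and path are hypothetical placeholders.

    # Sketch only - the SCP target is a hypothetical placeholder
    set system archival configuration transfer-on-commit
    set system archival configuration archive-sites "scp://backup-user@config-archive.example.net/juniper-configs/" password "REPLACE-ME"
    # alternatively, periodic uploads can be used, e.g.: set system archival configuration transfer-interval 1440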
R017 [6.5.5] (High): Please indicate any additional mandatory or recommended device software tools which are required for configuration, management, monitoring or diagnostic purposes.
Juniper response: All EX, QFX and Junos Space configuration and monitoring can be performed
using standard terminal emulation and web browser tools. No mandatory
device tools are required.
From a management point of view, several options exist for the configuration
and control of the proposed data centre design and for implementing automation
into the design.
CLI
All of the proposed devices run the same operating system, Junos. The Junos
operating system is a reliable, high-performance network operating system
for routing, switching, and security. It reduces the time necessary to deploy
new services and decreases network operation costs. Junos offers secure
programming interfaces and the Junos SDK for developing applications that
can unlock more value from the network.
Running Junos in a network improves the reliability, performance, and
security of existing applications. It automates network operations on a
streamlined system, allowing more time to focus on deploying new
applications and services. And it is scalable both up and down, providing a
consistent, reliable, stable system for developers and operators, which in
turn means a more cost-effective solution for your business.
SDN
Path Computation Client (PCC): PCC is an SDN technology available on the
EX9200 Series. PCC enables network programmability to allow IT managers
to dynamically create optimal paths including slices, overlays or virtual paths,
to optimize on-demand bandwidth requirements.
Junos Space
Junos Space is a Network Application Platform, which is fully capable of
fulfilling the role of a Network Management System (NMS) for Juniper
Network Devices and providing integration into the existing Operational
Support Systems (OSS) through leveraging existing capability and further
configuration/customisation where there is specific need. As the Network
Application Platform it integrates readily with Junos devices, such as the
EX4300, EX9200 and QFX5100, through a Direct Management Interface
(DMI) and exposes a set of APIs for integration into North Bound OSS. Junos
Space provides an integrated suite of capability across the FCAPS functions
and provides a web based GUI for end-user client access.
More details on Network Director 1.5 features can be found in the following
link:
http://www.juniper.net/techpubs/en_US/network-director1.5/information-
products/pathway-pages/index.html
Rather than rebuild the virtual switch that comes as part of the hypervisor
software, Junos Space Virtual Control integrates with the hypervisor vendor’s
own virtual switch.
This is not to suggest that, with the right testing and large enough WAN pipes,
it would not work, and with the use of EVPN this can be done, but an
element of caution should be held.
Our reasoning behind this is as follows. With the Federation of Clouds, the
possibility has been put forward that you can connect data centres using WAN
extensions for the purpose of moving VMs without losing sessions. This is
known as Long Distance vMotion by VMware or, generically, as long distance
live migration. This means that long distance vMotion would provide better-
allocated server resources and maintain application availability between data centres.
Juniper sees issues with doing long distance live migration and, beyond
debunking the protocols, we believe the use cases should be considered
carefully and best practices followed when discussing server
virtualization and networking. Live migration has been demonstrated to work
well within a routing domain and within the data centre using Data Centre
Bridging (DCB), but large scale vMotion over the WAN is mostly unproven, and
whilst Juniper has tested this to some degree, customers should be sceptical
of it.
Long distance live motion over the WAN has limitations due to latency and
bandwidth requirements and the complexity of extending layer 2. Issues
include the potential for misrouted traffic coming to the original data centre
when the VM has moved to the backup data centre, traffic tromboning where
traffic is looped from one router to another, large bandwidth requirements,
storage pooling and storage replication requirements and the complexity of
implementing the bridging architecture. The main problem with moving VMs
around is connecting back to the storage. Storage vendors market replication
schemes that could cause problems for customers and need to be carefully
evaluated.
One has to wonder if customers would implement live migration across data
centres to prepare for a once-in-a-lifetime event of a total DC shutdown when
they can implement a backup plan that is not nearly as complex or costly and
only causes approximately 30 minutes of downtime using shutdown, copy
and restart.
• Layer 3 – Routed Live Motion
If customers want to do live motion they could consider routed L3 live motion
which most hypervisor vendors support (VMware, Microsoft, KVM, Citrix).
This method uses dynamic DHCP / DNS updates and can use session
tunnelling to keep existing sessions alive if needed and it does not require L2
bridging.
Since Microsoft has announced support for L3 live motion with dynamic DNS
updates from the DHCP server and VMware has it in their product and so do
KVM and Citrix, L3 live motion is a possible alternative for customers that
don’t want to deal with L2 bridging.
Protocols
Listed below is a summary of the protocols that Juniper would recommend in
a Data Centre scenario where VMotion is an option.
• Eliminate STP
Juniper proposes Virtual Chassis Fabric on the EX and QFX Series switches.
No STP is needed.
• Scale and Extend VLANs
Juniper proposes VPLS to VLAN stitching from the EX9200 to the QFX, or
you could use QinQ or VLAN groups. Juniper also supports EVPN for L2
stretch, which could tie back to either VLANs within the DC, a VPLS domain
or Juniper Contrail.
• Connect to virtual ports
Juniper could propose Virtual Ethernet Port Aggregator (VEPA), which is part
of IEEE 802.1Qbg and available in the EX and QFX Series, or Space
Virtual Control.
• Enable L2 bridging for live motion
Juniper: VPLS is the traditional approach; EVPN on the EX9200 is a Juniper-sponsored
standards-track protocol that overcomes the limitations of VPLS.
• Move IP address
Summary
Whilst vMotion over long distances is possible, Juniper believes
that at this time such approaches are untried, untested and to some degree rely on
unreasonable amounts of bandwidth, potentially creating routing problems
and problems reaching storage. Customers should proceed with caution and
carefully consider whether to go long distance with vMotion or to take a more local
approach to vMotion in the first instance, and should test long distance vMotion
before implementing it or relying on it as the foundation of their virtual
strategy.
R020 [6.6.3] (High): Currently the network infrastructure relies upon physical separation, CAPS approved cryptography and Evaluated firewalls to provide separation between different security domains.
Please explain how your products could enable us to consolidate physical resources whilst still maintaining assured separation conformant to CESG standards such as, but not limited to, GPG12 (Virtualization), IAS1 Part 2 Appendix C (Assurance) and IAS4 (Cryptography).
Juniper response: As outlined in response R008, several technologies exist from Juniper to aid
in the secure transportation and separation of traffic with different security
gradings over a single IP infrastructure. As discussed in response R008,
Juniper can use VPLS to provide separation between the different traffic
types by creating separate VPLS domains for traffic with different
security grades. The VPLS domains could be stretched between data centres
either by extending VPLS into the WAN or via the use of EVPN, which would
be tied to the VPLS domains at each Data Centre.
Juniper can also encrypt VPNs, which would allow traffic deemed restricted
(in the old scheme) or secret (in the Government’s new scheme) not only to be
isolated within its own VPLS domain but also encrypted. Traffic graded as
official under the new marking scheme would also be allocated to its own
VPLS domain, but would require no encryption.
In both of these methods, Juniper believes this would provide the same
standard of security as the PSN, and Juniper could implement a virtual version
of the CPA-approved SRX to provide a complete solution.
Juniper would like to point out that, at this time, we believe the following is still true
from CESG. For the moment, cross-domain solutions (where a virtual
machine is connected to more than one security domain) are specifically
excluded and that data sharing is similarly excluded, though it remains
acceptable to use existing approaches to transfer data (for example, passing
data from one security domain to an external cross-domain solution, and then
returning it to the other security domain). The specific example of writing to
removable media, un-mounting that media, and then remounting it in the
other security domain, is permitted provided the data owners have evaluated
the risks of importing malware or leaking data, and have agreed how they are
going to handle the risks.
With this in mind, we would work with XYZ, Inc. to find a suitable solution,
which would meet CESG requirements and provide the cost saving of not
having to implement separate solutions.
R021 [6.6.4] (High): Please explain how your devices are able to provide increased connectivity and throughput, and how they would enable XYZ, Inc. to:
• Increase the speed at which new connections are deployed
• Upgrade to 10 Gigabit Ethernet server connectivity to provide consolidation of existing bulk 1GE copper connections
• Aid planning for short term peak workloads
Juniper response: To provide this functionality we have specified mixed Virtual Chassis Fabrics of
EX4300s, to provide RJ45 copper-based connectivity, and QFX5100s, to
provide 1/10GbE SFP-based connections. The QFX5100-48S, as described
earlier within our responses, provides 48 ports of 1/10GbE and 6 x 40GbE
uplink ports. The 48 ports of 1/10GbE are dual-speed ports with no limitation
on which can be 10GbE or 1GbE. In providing dual-speed ports as standard,
XYZ, Inc. has a low cost migration path towards 10GbE virtual servers
when older 1GbE servers are no longer required. The only cost in this
migration would be for SFP optics or DAC twinax cables, as all of the ports on
the QFX5100 are SFP based.
The same principle applies to uplink ports. Juniper has specified 10GbE
connectivity using the 6 x 40GbE uplinks via a 40GbE to 4 x 10GbE breakout
cable, to allow 10GbE uplinks on day one, with the option as traffic increases
to utilise 40GbE via the introduction of 40GbE QSFPs and 40GbE DAC
cables.
In introducing this capability from day one as standard across the design,
short term peak workloads can be managed as the migration to 10GbE virtual
servers gathers pace whilst providing a simple platform in the form of VCF to
manage multiple switches from a single management instance.
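As an indication of how the 40GbE uplink ports would be channelised for day-one 10GbE operation, the sketch below shows the relevant Junos statement on a QFX5100; the FPC/PIC/port numbering is a hypothetical example.

    # Sketch only - FPC/PIC/port numbering is hypothetical
    # Run uplink port 48 as 4 x 10GbE (xe-0/0/48:0 to xe-0/0/48:3) via a QSFP+ breakout cable
    set chassis fpc 0 pic 0 port 48 channel-speed 10g
    # Removing the statement returns the port to native 40GbE operation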
At the end of the lease, XYZ, Inc. would have the following End of Lease
options:
• Please see all quotes in the Pricing Appendix for discount structures and
rebates.
We have also provided a TCO Savings report, citing savings of 79% in
power and cooling and 76% in space.
Sales Services
Table 13.6: Sales Service Requirements
Ref Requirement Weighting
R023 [6.7.1] (Mandatory): Pre-sales: Please describe the services and/or resources that could be available to XYZ, Inc. for both commercial and technical help with the design and pricing of new project requirements.
Juniper response: Juniper offers pre-sales resources for named accounts such as XYZ, Inc.
Juniper has an assigned, security-cleared pre-sales resource for XYZ, Inc.,
a named account manager, a partner account manager, a partner technical
account manager, and a resource pool of Sales Engineers, all based in the
UK, to provide commercial and technical support in a pre-sales environment.
R024 [6.7.2] (Mandatory): Post-sales: Please describe any post-sales services and/or resources which could be available to XYZ, Inc. in order to assist with the ongoing support and maintenance of deployed solutions, especially regarding any design/feature/function compatibility and interoperation.
Please provide ‘bug-scrubs’ for proposed solutions.
Juniper Juniper can offer an advanced level of support based on your requirements
and the complexity of your network. In order to assist XYZ, Inc. with the
ongoing support and maintenance of the deployed solutions Juniper has
included optional pricing for the Juniper Care Plus service, in addition to the
Juniper Care maintenance pricing with 4 hour hardware replacement as
requested. The full service descriptions for both Juniper Care and Juniper
Care Plus are provided in the Appendix.
The Advanced Services provided through Juniper Care Plus enable you to
manage the ongoing lifecycle tasks effectively: design changes, software
feature and functionality reviews, network change and associated
implementation planning, reviews of your software requirements, assessment
of software upgrade risk, analysis of potential impact on your network, and
recommendations on a target software release that best meets your requirements.
Juniper Juniper could provide a Resident Consultant to work alongside the XYZ, Inc.
team for either 6 or 12 months. The Resident Consultant works daily with
XYZ, Inc. staff at your location, becoming intimately familiar with your unique
processes and requirements, your network’s specific configurations and
challenges, and your staff’s strengths and limitations. Your Resident
Consultant helps you avoid many network issues before they arise, and is
fully prepared to help resolve issues as quickly as possible when they do
occur. The Resident Consultant assists with network design; deployment and
support process definition and documentation; deployment and implementation of
Juniper Networks equipment; and post-cutover activities for your network.
Typical activities include:
• Applying extensive industry experience to optimize network performance and
proactively analyze potential enhancements
• Evaluating technical specifications for interoperability
Consultancy-based project time is also available; please see the SOW in the
appendices.
Product Introduction
A new network enterprise platform will represent a significant outlay for XYZ, Inc. to develop
support capabilities, build a service model and execute the migration of services from existing
platforms. The resources and time taken to introduce a new platform to live service are a
significant risk. For each of the below requirements please state how your proposal would reduce
the impact and burden of the risks and costs to XYZ, Inc.
_____________________________________________________________________
Juniper: Read and Understood
Juniper is well versed in cross-training; we can offer * to Junos courses, as
well as helpful conversion kits to assist ongoing learning.
Publicly scheduled courses are available at a cost of approximately $700 per
person per day. Alternatively, Juniper can provide private class training for up
to 16 students per class at $7,000 per day (private, on-site, with some
customisation if required).
To enable and support the XYZ, Inc. operation team during capability
development and network transition, Juniper will provide the following new
customer services which are complimentary.
This training has been specifically calibrated for multi-tiered network teams
and includes:
Service Management - provided on a remote basis for ninety (90) days, including
support for key operational tasks as the Customer becomes familiar with Juniper
Networks’ support infrastructure. The start date is one week after receipt of the
purchase order.
Operational Review, covering:
- Organizational roles
- Maintenance procedures
- Upgrade procedures
R026 Service model – this includes all aspects to build a service around the Mandatory
[6.8.2] infrastructure such as:
• Integration with the XYZ, Inc. enterprise management Framework:
work required to configure and deploy management tools for XYZ,
Inc.’s needs
• Support routes: setting up correct support routes for first, second and
third line support with correct paths for escalation
• Validation of XYZ, Inc. solutions: work required by XYZ, Inc. to validate
standard configurations on new hardware/operating system
combinations – a common issue is availability of hardware on which to
do the necessary validation
Please suggest possibilities for reducing the time and associated cost to XYZ,
Inc. to develop these capabilities which would be necessary before any new
platform could be considered ready for live service. If this includes
chargeable professional services please include appropriate costs in your
proposal.
Juniper Juniper’s Professional Services organization includes the most experienced
and knowledgeable internetworking professionals in the industry. Our team
appreciates the complexities and the subtleties inherent in large-scale design
and can assist XYZ, Inc. with respect to the proposed project.
The transformation methodology is based on four phases (Design, Assess, Plan and
Migrate), with the first three being part of the Plan stage of the network lifecycle.
To support XYZ, Inc. during the Build and Operate phases, Professional Services
(PS) can provide a Resident Consultant (RC) and Test, Implementation and
Migration (TIM) engineers to complement the customer’s engineering and
operations teams.
The Junos Space platform exposes APIs that can often be consumed by external
systems directly and integrated by the customer’s systems integrators/partners.
In addition, Juniper Networks’ Professional Services can provide a range of
supporting services, from consultancy through to configuration, customisation,
implementation and testing of the Junos Space API and its integration.
To provide not only a cost efficient but also time efficient method of validating
the proposed solution, Juniper would look to validate the proposed solution
with XYZ, Inc. through a Proof of Concept at our labs, which would allow both
Juniper and XYZ, Inc. to define the baseline architecture and a common
configuration set across the equipment not only to test different POC
scenarios but also to carry forward to an on-site test.
Once the POC is complete, we can then take these configuration examples and
implement them in an on-site testing lab such as SHP01, producing a cut-down
version of the full architecture using the equipment proposed for SHP01 and a
baseline configuration as we move towards a live implementation.
Juniper would envisage that the equipment proposed for SHP01 would stay
in that location and would be used for further software testing as the live
solution evolves.
Juniper is also able to provide access to Juniper Cloud Labs, which would
provide XYZ, Inc. with another option for testing software and configurations
in a virtual environment. As this is a virtual system run within Juniper’s private
cloud, XYZ, Inc. and Juniper could model an example of the XYZ, Inc. network,
and XYZ, Inc. staff can then access this private cloud to test configurations
before retesting them in a lab environment.
In each case these services give both Juniper and XYZ, Inc. a process for
validating the design and configuration prior to live implementation, and a
means of testing further changes.
R027 Product build definition – represents the work required to develop a standard Mandatory
[6.8.3] build:
• Standard OS image
• Hardening of OS configuration as required to meet XYZ, Inc. security
standards
• Template containing standard base configuration (including device
remote access for config/support and management tool access) and
typical interface/routing configuration
Again, please describe the help you can offer to reduce the impact of
ensuring all the above capability is in place as will be necessary for the
platform to be available for live service. If this includes chargeable
professional services please indicate appropriate costs in your proposal.
Juniper Juniper prides itself on the development and stability of the Junos OS
software. This is in part due to our focus on working with customers to
provide them with the best version of software possible for their deployment
and working with them to introduce new features into Junos.
• One operating system: Reduces time and effort to plan, deploy, and
operate network infrastructure and provides the same configuration
experience across all platforms
• One release train: Provides stable delivery of new functionality in a
steady, time-tested cadence.
• One modular software architecture: Provides highly available and
scalable software that keeps up with changing needs
This means that XYZ, Inc. can rely on a standard OS image across all of the
platforms proposed in our solution and, after testing, standardise on a single
OS image that meets all of their requirements.
From a hardening point of view, Junos already goes through several rounds of
beta testing, both within Juniper and with specific customers, before being
released to market. With this in mind, Juniper would sign XYZ, Inc. up to the
Junos Beta program. The purpose of the Beta program is to partner with customers
to test and validate new features prior to first revenue ship (FRS) and to give
customers an opportunity to share feedback with Juniper on product usability,
features and performance.
In this way, XYZ, Inc. can test new software features prior to release and
decide whether these features and the new software version should be implemented
in the network. Another aspect of the beta release program is that the Junos
engineering team can test functions and features specific to XYZ, Inc. prior to
beta release. Your aligned Juniper SE, with XYZ, Inc.’s permission, would provide
the Junos team with an in-depth design and configuration of your network. The
Junos team would then test new software against this design in their labs and
provide XYZ, Inc. with the results. This allows XYZ, Inc. to have Junos developed
with their infrastructure in mind.
Your aligned SE would provide beta access and sign XYZ, Inc. up to the programme
upon order of the support contract. This is a free service that Juniper provides
for the benefit of both Juniper and its customers.
To complement the above services, Juniper also has a cloud based virtual
environment called Juniper Cloud Labs, (JCL). JCL is a virtual environment
from Juniper Networks that can reduce the costs of network planning and
modelling by giving users access to a virtual environment where they can
create and run elements and networks that run the Junos operating system.
JCL reduces operational costs and limits risk by providing a virtual networking
lab where flawless network implementations can be modelled and designed.
When built, these networks can be used for training, network modelling,
planning for new services or examining "what-if" scenarios for the installed
network.
Networks created in the JCL environment run the same Junos operating
system that powers Juniper routers, switches and security platforms for
unparalleled realism and accuracy. They can be easily and quickly scaled
down or up to hundreds or thousands of nodes for a level of scale and
accuracy not possible with alternative network simulation approaches.
This would provide XYZ, Inc. with another option for meeting their security
hardening processes and for testing new features and functions without the use
of a test lab or running them in the live environment.
The web link below provides access to more details on this service:
https://jlabs.juniper.net/jcl/
What all of the above items provide is the ability to test and configure a
standard configuration template and OS version for devices prior to
implementation in a live environment, thus reducing risk and time to
implementation.
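Purely as an illustration, and not as XYZ, Inc.’s final standard, a fragment of
such a hardened base template in Junos set-command form might look like the
following; all addresses and strings are placeholders, and the actual statements
would be driven by XYZ, Inc.’s security standards:

    # Illustrative hardening fragment - values are placeholders only
    set system services ssh protocol-version v2
    set system services ssh root-login deny
    delete system services telnet
    set system login message "Authorised users only"
    set system syslog host 192.0.2.10 any notice
    set system ntp server 192.0.2.20
    set snmp community MONITOR-RO authorization read-only

Because the EX and QFX platforms run the same Junos configuration syntax, the
agreed template can be applied consistently across every device in the proposed
solution.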
The components are divided into four areas: Design, Assess, Plan and Migrate.
Design
Effective design is essential to reducing risk, delays, and the total cost of
network deployments. This is the element of the process that ties the sales
and solution team to the customer’s technical team and in turn the Juniper
Pro-services team.
It’s at this point that these teams come together to address the following:
• Build upon the initial solution
• Work out any additional requirements not covered by the RFP
• Gain a thorough understanding of the customer’s immediate and
long-term business requirements
• Establish how the solution will address these requirements and any future
performance and capacity issues
• Test the initial solution in a POC to validate the architecture, how the
solution will interoperate with the customer’s existing solution, and whether
the two can run in parallel
The output from these processes allows Juniper and yourselves to finalise an
architectural foundation capable of supporting your current needs with the
scalability required to allow you to take advantage of future opportunities. It
also proves the solution so Juniper and XYZ, Inc. can move to the next stage,
which is Assess.
Assess
This starts the process in which Juniper Professional Services become the
project leads from Juniper’s point of view and the sales team step back.
Juniper Professional Services follow a defined migration methodology for this
stage.
The Assess process covers three main areas: project definition, project scope,
and identifying the potential threats to the implementation and migration.
Project Scope audits the existing infrastructure, identifies any potential issues
that may affect the implementation, categorises them, and assigns the right
people to mitigate each risk.
This then feeds into the threat identification process, so that all issues are
tested and signed off with the right stakeholders prior to migration and live
service implementation, and any solution adjustments are agreed.
Plan
The Plan element of the Pro-services engagement draws up the necessary
documentation and project scope, taking into account the items already covered,
so that all aspects of the project are ready to be implemented together with the
potential risks, their dependencies, and how any unforeseen issues can be
mitigated.
Migrate
The Migrate process is the implementation of the project, taking into account
all of the items noted previously and ensuring that the implementation is phased
to mitigate any potential problems. A phased approach also allows the existing
and new services to run in parallel, with services moved from the old platform
to the new once both have run side by side for a suitable period.
R029 Introduction requirement – covers additional costs for the introduction of new Mandatory
[6.8.5] infrastructure, this can include additional resources such as:
• Project management
• Project architect
Please describe the support available to reduce the additional burden on
XYZ, Inc. of coordinating the introduction of a new hardware platform. If this
includes chargeable resources please indicate appropriate costs in your
proposal.
Juniper Project Management
Essential to the success of the DC Network Fabric is clear, concise project
management that ensures the project runs smoothly and that communication between
the Customer and Juniper is strong, including the use of a shared file structure.
Juniper Networks has developed the Juniper Project Management Methodology (JPMM),
based on the standardized PMI and PRINCE2 project management methodologies,
utilizing the best practices from both and incorporating additional internally
developed processes.
• A Project Architect can oversee all design work and act as the
Juniper Technical Lead.
The supplier should indicate which of the listed requirements in this document
can be proved in their Proof of Concept environment
Juniper Juniper can provide proof of concept (POC) testing against all of the
listed requirements noted above in two ways: either via an off-site POC in one
of Juniper’s purpose-built POC labs, or via an on-site POC.
Juniper’s purpose built POC labs are located in Sunnyvale US, Westford US,
Amsterdam NL, Hong Kong and Singapore. Each POC lab can provide a
complete demonstration lab of the proposed solution with full fault and feature
testing and configuration. It can also provide interoperability with third party
products that are part of the solution or the client’s existing network. In each
case we carry a large volume of other vendors’ equipment to allow us to
simulate not only the Juniper network but also an existing customer network
to fully test prior to implementation.
For an on-site POC the principles are the same, but Juniper would be more
reliant on the customer’s lab environment. Juniper would supply the necessary
equipment from its loan pool in Amsterdam and ship it to site. Juniper would
then resource Juniper-based SEs and the Professional Services team to run the
POC with the customer, replicating the same process that Juniper would use in
one of its dedicated POC labs.
It should be noted that a cost may be involved for the professional services
element of this scenario, whereas a POC in a Juniper lab would be free apart
from travel-related costs.
Environmental Requirements
Table 13.8: Environmental Requirements
Ref Requirement Weighting
R031 Please describe initiatives, both technology and methodology, your Mandatory
[6.9.1] organization is implementing to reduce the environmental impact of your
products and services in the following specific areas:
• Manufacture
• Power efficiency
• Disposal
Juniper Juniper's greatest impact on the environment is through our products, so
we're reducing our carbon footprint with products that are environmentally
responsible in all phases of their life cycles, a complex challenge that
demonstrates our commitment to protecting the environment.
We monitor compliance with local and international laws in all of our locations
worldwide, and work with governments, industry partners, and consortia, to
harmonize regulations with innovation. We collaborate with governments,
industry vendors, and customers, to develop and implement energy metrics
that measure the efficiency of networks.
Juniper provides recycling support for our equipment in order to meet the
European Union's WEEE Directive. All of our products introduced after
August 13, 2005, are marked or documented with an icon depicting a
crossed-out wheeled bin with a bar underneath per the requirement.
Additional Requirements
Warranty/Support
Warranty/Support must be in place to cover these units for the period of their service life and
must commence from installation on site rather than from delivery to the bonded warehouse.
Please clearly specify warranty provided with each item/product proposed.
_____________________________________________________________________
Juniper: Read and Understood
Juniper’s Standard Warranty applies to the QFX range. Our Enhanced Limited Lifetime Warranty applies
to the EX range, covering the EX9204 and the EX4300.
The Standard Warranty covers 1 year, with a start date within 90 days of the original shipment to
XYZ, Inc.; please see Appendix One for complete details.
The Enhanced Warranty covers 5 years, with a start date within 90 days of the shipment date; please
also see Appendix One.
Juniper would be prepared to offer some flexibility once we understand the installation schedule,
and can seek VP approval for a deferral, within reason.
The support and maintenance start date can be postponed for up to 6 months after purchase;
however, if the support is not procured within one year of the hardware purchase date, Juniper has
the right to impose a reinstatement fee after that time.
Maintenance
During the migration XYZ, Inc. will have a number of existing support and maintenance contracts;
the vendor shall include in their response how they plan to offset these costs.
Regardless of existing XYZ, Inc. support and maintenance contracts with incumbent suppliers, any
response to this RFP MUST include full details of proposed support and maintenance contracts
(including costs). Details that must be included are as follows:
• Warranties
• Recommendations as to engineer support/location
• Type of cover proposed – field support or on-site support
• Direct access to the supplier technical support (people and documentation) for raising
cases and troubleshooting
• Cover periods proposed for type of cover as detailed above (XYZ, Inc. anticipate peak
period utilisation to be 7 day x 24 hour, low periods to be 5 day x 9 hour approx.)
• Annual charges for parts inclusive policy
• Pricing over a 5-year model
• Cost of hardware replacement if faulty unit cannot be returned – any additional cost
needs to be specified if this is not included in the support contract
• A 4-hour SLA for hardware support
Please describe any additional offerings that would include the support of the current switch
infrastructure including costs.
_____________________________________________________________________
Juniper: Read and Understood
With reference to all of the above queries: these have been answered and incorporated within the
bid document.
References
Please provide two reference sites to support the information within your proposal. These should
include references that demonstrate the following:
• Experience in government accounts
• Interoperability with Cisco Catalyst/Nexus-based Data Centre network deployments
These references will be used to aid marking against the requirements described above.
_____________________________________________________________________
Juniper: Read and Understood
The Cabinet Office also uses Juniper switches for its internal infrastructure.
The solution is divided between three sites: the Prime Minister’s Office and two secure data centres.
Within each location, a dual core layer connects to a series of spine-and-leaf Virtual Chassis.
These three core locations are connected to each other via dual 10GbE dark fibre to create a
resilient MPLS ring.
The core layer at each site is dual-connected to form a Virtual Chassis. Due to the security
requirements at all sites, users’ PCs connect to the network via direct fibre; as such, the access
layer of the solution consists of 1GbE fibre switches acting as leaf nodes, connecting back to a
spine layer of 10GbE switches to form a Virtual Chassis. This is then replicated to form a series
of Virtual Chassis which make up the user connectivity at each site, and is duplicated in the
data centres for server connectivity.
• NYSE Euronext needed next-generation data centres, consolidating from 10 to 4, to meet its
stringent requirements for high performance, extraordinary reliability, low latency, scalability,
and trading predictability
• Juniper deployed EX Series Ethernet Switches (800+), QFX3500 switches (125+) and MX Series 3D
Universal Edge Routers (40+)
• Juniper’s security presence at NYSE includes Remote Access Infrastructure, Site-to-Site
VPN, and Active Directory User Authentication
Supporting Material
Vendors can include any other supporting material within their proposal that they feel adds value
to XYZ, Inc.’s strategic aims and brings innovation whether through technical capability,
professional services or commercial offerings. This material will be used to aid marking against the
requirements already defined.
_____________________________________________________________________
Juniper: Read and Understood. Please see Appendix 1 for supporting technical datasheets.
Juniper would also like to offer XYZ, Inc. an ‘Onboarding Package’. This is a program that
eliminates potential risk when migrating to Juniper.
We offer it as a free-of-charge value add when Juniper has no current estate with an end-user
customer, such as XYZ, Inc. * The part number is included within the Hardware Pricing as a Zero
Commercial Offering.
Please see Appendix Two, Pricing for Hardware, Support & Maintenance and
Professional Services, Value Add and TCO.
Legal
Please note that Juniper Networks is the manufacturer of the equipment proposed in this RFP.
Accordingly, Juniper Networks and XYZ, Inc. will not have a direct contractual relationship. Any
contract for purchase of hardware, installation and commissioning templates will therefore need
to be negotiated and agreed upon between XYZ, Inc. and ACME. Juniper Networks will not be
bound by the terms and conditions of such contract between XYZ, Inc. and ACME.
• EX9200.pdf
• EX4300.pdf
• QFX5100 datasheet (QFX5100.pdf)
• Junos Space.pdf
• A Methodology for Transformation of Data Center Networks.pdf
• JTAC Guide (JTAC Guide.pdf)
• Standard Product Warranty.pdf
• Enhanced Limited Lifetime Warranty on EX.pdf