Slide 3
Architecture
Infrastructure Building Blocks
and Concepts
Networking – Part 2
(chapter 9)
Network Virtualization (VLAN)
• VLANs:
Allow segmenting a network at the data link layer
Allow end stations to be grouped even if they are not physically connected to the same
switch
Can adapt to changes in network requirements and allow simplified administration
Enhance security by preventing traffic in one VLAN from being seen by hosts in a
different VLAN
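Segmentation at the data link layer works by tagging each Ethernet frame with a VLAN ID. As a minimal sketch (not part of the chapter), this Python snippet parses the 802.1Q tag that carries that ID; the frame layout follows the IEEE 802.1Q standard:

```python
import struct

def parse_vlan_id(frame: bytes):
    """Return the 802.1Q VLAN ID of an Ethernet frame, or None if untagged."""
    # Bytes 12-13 hold the EtherType; 0x8100 marks an 802.1Q-tagged frame.
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x8100:
        return None
    # The next two bytes are the TCI: 3 bits priority, 1 bit DEI, 12 bits VLAN ID.
    tci = struct.unpack("!H", frame[14:16])[0]
    return tci & 0x0FFF

# A minimal tagged frame: dst MAC, src MAC, 0x8100 tag with VLAN 42, inner EtherType.
frame = bytes(6) + bytes(6) + struct.pack("!HHH", 0x8100, 42, 0x0800)
print(parse_vlan_id(frame))  # 42
```

A switch uses this 12-bit ID to keep traffic of one VLAN invisible to hosts in another, regardless of which physical switch port a host is plugged into.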
• A router that supports VRF (Virtual Routing and Forwarding) can implement multiple virtual routers.
One or more interfaces on the router can be part of a VRF, but VRFs do not share
routing information. Packets are only forwarded between interfaces that are in the
same VRF
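The isolation between VRFs can be modeled as each VRF keeping its own routing table. The following is a hypothetical sketch (the VRF names and routes are invented for illustration): a lookup only consults the table of the VRF that the ingress interface belongs to, so the same prefix can exist in two VRFs without conflict:

```python
# Hypothetical sketch: each VRF keeps its own routing table; a lookup for an
# ingress interface only consults the table of the VRF that interface belongs to.
vrfs = {
    "customer-a": {"interfaces": {"eth0", "eth1"}, "routes": {"10.0.0.0/24": "eth1"}},
    "customer-b": {"interfaces": {"eth2"}, "routes": {"10.0.0.0/24": "eth2"}},
}

def forward(in_interface: str, prefix: str):
    """Return the egress interface, staying inside the ingress interface's VRF."""
    for vrf in vrfs.values():
        if in_interface in vrf["interfaces"]:
            return vrf["routes"].get(prefix)  # never consults another VRF's table
    return None

print(forward("eth0", "10.0.0.0/24"))  # eth1: resolved in customer-a's table
print(forward("eth2", "10.0.0.0/24"))  # eth2: same prefix, customer-b's table
```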
Three-tier network topology
• Core layer
This is the center of the network
• Distribution layer
An intermediate layer between the core layer in the datacenter and the access switches
in the patch closets
Combines the access layer data and sends its combined data to one or two ports on the
core switches
• Access layer
Connects workstations and servers to the distribution layer
For servers, located at the top of the individual server racks or in blade enclosures
For workstations, placed in patch closets in various parts of the building
Spine and Leaf topology
• In an SDN, a simple physical network is used that
can be programmed to act as a complex virtual
network
• Such a network can be organized in a spine and
leaf topology
• Characteristics:
The spine switches are not interconnected
Each leaf switch is connected to all spine switches
Each server is connected to two leaf switches
The connections between spine and leaf switches typically
have ten times the bandwidth of the connectivity between
the leaf switches and the servers
Spine and Leaf topology
• Benefits:
Highly scalable
There are no interconnects between the spine switches
Simple to scale
Just add spine or leaf switches
With today’s high density switches, many physical servers can be connected using
relatively few switches
Each server is always exactly four hops away from every other server
Leads to a very predictable latency
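The fixed hop count follows directly from the topology rules above. As a minimal sketch (spine and leaf counts are arbitrary examples), the code below builds a leaf-spine fabric and verifies with breadth-first search that any two leaves are two hops apart, so two servers on different leaves are always 2 + 2 = 4 hops apart:

```python
from collections import deque

def build_fabric(spines: int, leaves: int):
    """Adjacency list: every leaf links to every spine; spines are not interconnected."""
    graph = {f"spine{s}": set() for s in range(spines)}
    graph.update({f"leaf{l}": set() for l in range(leaves)})
    for s in range(spines):
        for l in range(leaves):
            graph[f"spine{s}"].add(f"leaf{l}")
            graph[f"leaf{l}"].add(f"spine{s}")
    return graph

def hops(graph, src, dst):
    """Breadth-first search: number of links on a shortest path."""
    queue, seen = deque([(src, 0)]), {src}
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in graph[node] - seen:
            seen.add(nbr)
            queue.append((nbr, dist + 1))
    return None

fabric = build_fabric(spines=4, leaves=16)
# Any two different leaves are two hops apart (leaf -> spine -> leaf),
# so two servers attached to different leaves are always 2 + 2 = 4 hops apart.
print(hops(fabric, "leaf0", "leaf15"))  # 2
```

Scaling is equally mechanical: adding a leaf adds one link per spine, and adding a spine adds one link per leaf, without touching any existing path.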
Latency
• Latency is defined as the time from the start of packet transmission to the
start of packet reception
• Latency is dependent on:
The physical distance a packet has to travel
The number of switches and routers the packet has to pass
• Rules of thumb:
6 ms latency per 100 km
WANs: Each switch in the path adds 10 ms to the one-way delay
LANs: add 1 ms for each switch
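These rules of thumb combine into a simple estimator. The function below is a sketch that applies exactly the numbers above (6 ms per 100 km, 10 ms per WAN switch, 1 ms per LAN switch); the example paths are invented for illustration:

```python
def one_way_latency_ms(distance_km: float, switches: int, wan: bool) -> float:
    """Rule-of-thumb estimate: 6 ms per 100 km, plus 10 ms (WAN) or 1 ms (LAN) per switch."""
    propagation = 6.0 * distance_km / 100.0
    per_switch = 10.0 if wan else 1.0
    return propagation + switches * per_switch

# A 500 km WAN path through 3 switches: 30 ms propagation + 30 ms switching.
print(one_way_latency_ms(500, 3, wan=True))   # 60.0
# A LAN path through 2 switches, distance negligible.
print(one_way_latency_ms(0, 2, wan=False))    # 2.0
```

For a symmetric path, the round-trip latency is simply twice the one-way estimate, which is what a tool like ping reports.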
Latency
• One-way latency: the time from the source sending a packet to the
destination receiving it
• Round-trip latency: the one-way latency from source to destination plus
the one-way latency from the destination back to the source
• “ping” can be used to measure round-trip latency
Quality of Service (QoS)
• Quality of service (QoS) is the ability to provide different data flow priority
to different applications, users, or types of data
• QoS allows better service to certain important data flows compared to less
important data flows
• QoS is mainly used for real-time applications like video and audio streams
and VoIP telephony
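One common way QoS is realized is strict-priority queuing: packets of more important flows are always dequeued before less important ones. The sketch below illustrates the idea (the class name and priority values are invented; real devices classify traffic with markings such as DSCP):

```python
import heapq

# Hypothetical sketch: a strict-priority scheduler, one way QoS can be realized.
# Lower priority number = more important (e.g. VoIP before a bulk file transfer).
class PriorityScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps FIFO order within a priority class

    def enqueue(self, priority: int, packet: str):
        heapq.heappush(self._queue, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._queue)[2]

sched = PriorityScheduler()
sched.enqueue(3, "bulk transfer")
sched.enqueue(0, "VoIP frame")
sched.enqueue(1, "video frame")
print(sched.dequeue())  # VoIP frame
print(sched.dequeue())  # video frame
```

This is why real-time traffic benefits most: its packets jump ahead of queued bulk data, keeping its latency low and predictable.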
Firewalls
• Firewalls separate two or more LAN or WAN segments for security reasons
• Firewalls block all unpermitted network traffic between network segments
• Permitted traffic must be explicitly enabled by configuring the firewall to
allow it
• Firewalls can be implemented:
In hardware appliances
As an application on physical servers
In virtual machines
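The default-deny behavior described above can be sketched in a few lines: traffic passes only if a rule explicitly permits it. The networks and ports in this example are invented for illustration:

```python
import ipaddress

# Hypothetical sketch of default-deny filtering: traffic passes only if a rule
# explicitly permits it, mirroring how permitted traffic must be configured.
RULES = [
    # (source network, destination port) pairs that are explicitly allowed
    (ipaddress.ip_network("10.0.0.0/24"), 443),
    (ipaddress.ip_network("10.0.1.0/24"), 22),
]

def is_allowed(src_ip: str, dst_port: int) -> bool:
    src = ipaddress.ip_address(src_ip)
    return any(src in net and dst_port == port for net, port in RULES)

print(is_allowed("10.0.0.7", 443))   # True: explicitly permitted
print(is_allowed("10.0.0.7", 22))    # False: no matching rule, blocked by default
```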
Segmentation
Micro-segmentation
IDS/IPS
• RADIUS (Remote Authentication Dial-In User Service)
Authenticates users or devices before granting them access to a network
Authorizes users or devices for certain network services
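The two RADIUS steps named above can be sketched as separate checks: authentication (is this user who they claim to be?) and authorization (which network services may they use?). This is a toy model with invented users; a real deployment queries a RADIUS server rather than an in-memory dictionary:

```python
# Hypothetical sketch of the two RADIUS steps: authentication, then authorization.
USERS = {"alice": "s3cret"}             # credentials known to the server
SERVICES = {"alice": {"vpn", "wifi"}}   # services each user is authorized for

def authenticate(user: str, password: str) -> bool:
    """Step 1: verify the user's identity before granting any network access."""
    return USERS.get(user) == password

def authorize(user: str, service: str) -> bool:
    """Step 2: check whether the authenticated user may use this service."""
    return service in SERVICES.get(user, set())

if authenticate("alice", "s3cret") and authorize("alice", "vpn"):
    print("access granted")  # both checks pass, so the device joins the network
```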