Unit IV Software Defined Networks NT
Characteristics of
Software-Defined Networking. SDN- and NFV-Related Standards. SDN Data
Plane. Data Plane Functions. Data Plane Protocols. OpenFlow Logical Network
Device. Flow Table Structure. Flow Table Pipeline. The Use of Multiple Tables.
Group Table. OpenFlow Protocol. SDN Control Plane Architecture. Control Plane
Functions. Southbound Interface. Northbound Interface. Routing. ITU-T Model.
OpenDaylight. OpenDaylight Architecture. OpenDaylight Helium. SDN
Application Plane Architecture. Northbound Interface. Network Services
Abstraction Layer. Network Applications. User Interface.
SDN Architecture:
In traditional networks, the control plane and the data plane are embedded together in a
single device. The control plane is responsible for maintaining the routing table of
a switch, which determines the best path on which to send network packets, and the
data plane is responsible for forwarding packets based on the instructions
given by the control plane. In SDN, by contrast, the control plane and the data plane
are separate entities, and the control plane acts as a central controller for
many data planes.
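The separation can be illustrated with a small, purely conceptual sketch in Python (illustrative only, not any real controller or switch API): a single controller computes forwarding rules and installs them into the tables of several data-plane switches, which then only match and forward.

# Conceptual sketch of control/data plane separation (illustrative only).

class DataPlaneSwitch:
    """Forwards packets using a flow table installed by the controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # destination address -> output port

    def install_flow(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, packet):
        port = self.flow_table.get(packet["dst"])
        if port is None:
            return f"{self.name}: no matching flow, drop"
        return f"{self.name}: {packet['dst']} -> port {port}"

class Controller:
    """Central control plane: computes rules and pushes them to all switches."""
    def __init__(self, switches):
        self.switches = switches

    def push_rule(self, dst, out_port):
        for sw in self.switches:      # one decision, applied network-wide
            sw.install_flow(dst, out_port)

s1, s2 = DataPlaneSwitch("s1"), DataPlaneSwitch("s2")
ctrl = Controller([s1, s2])
ctrl.push_rule("10.0.0.2", out_port=2)
print(s1.forward({"dst": "10.0.0.2"}))   # s1: 10.0.0.2 -> port 2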
There are several approaches that led to the development of today's Software
Defined Networks (SDN). They are:
ForCES (Forwarding and Control Element Separation)
The 4D approach
Ethane
4D approach:
The 4D approach refactors network control into four planes: a decision plane with a global view of the network, dissemination and discovery planes that collect and distribute network state, and a data plane that simply forwards packets as instructed.
Principles of Ethane:
The network should be governed by high-level policies.
Routing should follow those high-level policies.
There should be a strong binding between packets and their origin in the
network.
Characteristics of Software:
Software is defined as a collection of computer programs, procedures, rules,
and data, and software engineering is the process of designing, developing, testing, and
maintaining software.
The characteristics of software include:
1. It is intangible, meaning it cannot be seen or touched.
2. It is non-perishable, meaning it does not degrade over time.
3. It is easy to replicate, meaning it can be copied and distributed easily.
4. It can be complex, meaning it can have many interrelated parts and features.
5. It can be difficult to understand and modify, especially for large and complex
systems.
6. It can be affected by changing requirements, meaning it may need to be
updated or modified as the needs of users change.
7. It can be affected by bugs and other issues, meaning it may need to be
tested and debugged to ensure it works as intended.
Characteristics of Software-Defined Networking:
SDN is characterized by the physical separation of the network control plane from the
forwarding plane, with a single control plane controlling several devices.
The data plane is also known as the user plane, the forwarding plane, or the
carrier plane.
Architecture:
The three layers in an SDN architecture are:
Application: the applications and services running on the network
Control: the SDN controller or “brains” of the network
Infrastructure: switches and routers, and the supporting physical hardware
Northbound APIs:
Applications and services communicate their requirements to the SDN controller through northbound APIs, which expose an abstract view of the network to the application layer.
Southbound APIs:
The SDN controller communicates with the network
infrastructure, such as routers and switches, through
southbound APIs.
In real time, the controller can change how the routers and
switches move data.
SDN Controllers
An SDN controller is the software that provides a centralized view of and control over the entire
network. Network administrators use the controller to govern how the underlying infrastructure’s
forwarding plane should handle the traffic.
The controller is also used to enforce policies that dictate network behavior. Network
administrators establish policies that are uniformly applied to multiple nodes in the network.
Network policies are rules applied to traffic that determine what level of access it has to
the network, how many resources it is allowed, and what priority it is assigned.
Having a centralized view of the network and the policies in place makes management
of the network simpler, more uniform, and more consistent.
The application, control, and infrastructure layers are kept separate in SDN and communicate through
APIs. Source: Open Networking Foundation
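As a purely illustrative sketch (not any specific controller's policy API), a policy can be thought of as a structured rule that the controller applies uniformly to every node it manages:

# Illustrative only: a toy policy record and a loop that applies it to
# every managed node, mimicking uniform, centralized policy enforcement.

policy = {
    "name": "guest-traffic",
    "match": {"vlan": 20},            # which traffic the rule applies to
    "access": "internet-only",        # level of network access granted
    "bandwidth_mbps": 10,             # resources allowed
    "priority": "low",                # queueing priority
}

managed_nodes = ["switch-1", "switch-2", "router-edge"]

def apply_policy(node, policy):
    # In a real deployment this would be a southbound call (e.g. OpenFlow
    # or NETCONF); here we just report what would be configured.
    print(f"{node}: enforce {policy['name']} "
          f"(access={policy['access']}, bw={policy['bandwidth_mbps']} Mbps, "
          f"priority={policy['priority']})")

for node in managed_nodes:
    apply_policy(node, policy)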
SDN Benefits
SDN offers a centralized, programmable network that can dynamically provision network resources so as
to address the changing needs of businesses. It also provides the following technical and business
benefits:
Direct programmability: SDN network policy is directly programmable because the control functions are
decoupled from forwarding functions, which enables the network to be programmatically configured by
proprietary or open source automation tools, including OpenStack, Puppet, Salt, Ansible, and Chef.
Centralized management: Network intelligence is logically centralized in SDN controller software that
maintains a global view of the network, which appears to applications and SDN network policy engines
as a single, logical switch.
Reduced capex: SDN potentially limits the need to purchase purpose-built, ASIC-based networking
hardware, and instead supports pay-as-you-grow models with its scaling capabilities. Most switches on
the market support SDN capabilities and software like OpenFlow (an SDN communications protocol).
Whether it is in a data center or other network, if the infrastructure contains switches with SDN
capabilities, they simply need to have the option activated. A massive truck roll is not needed to rip and
replace the infrastructure.
Reduced opex: The ability to automate the updates to the network’s software means there is no need to
rip and replace the whole infrastructure when business needs or network demand necessitate a change.
Additionally, policies can be uniformly spread network wide, reducing the chance for human error when
updating the network. Automation takes over the monotonous tasks from network administrators and
operators, which reduces the overall network management time.
Agility and flexibility: SDN can help organizations rapidly deploy new applications, services, and
infrastructure to quickly meet changing business goals and objectives because whenever something new
is created, a simple update deploys it network-wide.
SDN Challenges
SDN is not without its downsides. As with everything in the IT industry, there are security issues, scaling
problems, and a lack of widespread industry cooperation.
Security risks of centralized management: While centralized management makes networking easier, it is
also a security risk. It creates a single point of attack, and if the controller goes down, the whole network is affected.
SDN controller bottleneck: When there is only a single instance of an SDN controller, it can become a
bottleneck for a network with a large amount of traffic, routers, and switches. There is simply too much
to communicate with for one instance of a controller.
The virtualization principles that SDN introduced to the networking world can also be used in vehicle-to-
everything (V2X) communication for autonomous driving. SDN software normally covers only a single
data center; however, it can extend over an enterprise’s entire campus. By using SDN technology, a
campus can simplify its wireless and wired network connections, whether WiFi or Ethernet,
manage them centrally, and automate services.
OpenDaylight:
The typical OpenDaylight solution consists of five main components: the OpenDaylight APIs,
Authentication, Authorization and Accounting (AAA), Model-Driven Service Abstraction Layer (MD-SAL),
Services and Applications, and various southbound plug-ins.
The following diagram shows a simplified view of the typical OpenDaylight architecture. In this
section, the basic functionality of the main components is described; a detailed description of
particular OpenDaylight components is out of scope.
The platform also provides a framework for Authentication, Authorization and Accounting (AAA), and
enables automatic identification and hardening of network devices and controllers.
OpenDaylight APIs
The northbound API, which is used to communicate with the OpenStack Networking service (neutron), is
primarily based on REST. The Model-Driven Service Abstraction Layer (described later) renders the REST
APIs according to the RESTCONF specification based on the YANG models defined by the applications
communicating over the northbound protocol.
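For example, a client can exercise these RESTCONF-rendered APIs over plain HTTP. The sketch below is a minimal example; the URL path, port 8181, and the admin/admin credentials are assumptions about a typical default OpenDaylight installation.

# Minimal sketch: querying an OpenDaylight RESTCONF endpoint with the
# third-party 'requests' library. URL, port, and credentials are assumed
# defaults and may differ in a given deployment.
import requests

BASE = "http://localhost:8181/restconf"

resp = requests.get(
    f"{BASE}/operational/network-topology:network-topology",
    auth=("admin", "admin"),
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

topology = resp.json()
print(topology)   # JSON structured according to the YANG topology model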
The business logic of the controller is defined in Services and Applications. The basic overview of
services and applications available with the Boron release can be found on the OpenDaylight Boron
release web page. A more detailed view can be obtained from the Project list. The OpenDaylight project
offers a variety of applications, but usually only a limited number of the applications is used in a
production deployment.
The Model-Driven Service Abstraction Layer (MD-SAL) is the central component of the Red Hat
OpenDaylight platform. It is an infrastructure component that provides messaging and data storage
functionality for other OpenDaylight components based on user-defined data and interface models.
In MD-SAL based applications, YANG models define all required APIs, including
inter-component APIs, plug-in APIs, and northbound APIs. These YANG models are used by the
OpenDaylight YANG Tools to automatically generate Java-based APIs, which are then rendered
into REST APIs according to the RESTCONF specification and provided to applications
communicating over the northbound protocol.
Using YANG and YANG Tools to define and render the APIs greatly simplifies the development of new
applications. The code for the APIs is generated automatically which ensures that provided interfaces
are always consistent. As a result, the models are easily extendable.
Applications typically use the services of southbound plug-ins to communicate with other devices,
virtual or physical. The basic overview of southbound plug-ins available with the Boron release can be
found on the OpenDaylight Boron release web page; the Project list shows them in more detail.
The Red Hat OpenDaylight solution (part of the Red Hat OpenStack Platform) consists of the same five main
parts, but the selection of applications and plug-ins is limited. The Controller
platform is based on the NetVirt application, which is the only application currently supported by Red Hat.
In future releases, more applications will be added.
Most applications will only use a small subset of the available southbound plug-ins to control the data
plane. The NetVirt application of the Red Hat OpenDaylight solution uses OpenFlow and Open vSwitch
Database Management Protocol (OVSDB).
The overview of the Red Hat OpenDaylight architecture is shown in the following diagram.
OpenDaylight Helium
OpenDaylight Helium is the second release, after Hydrogen, and it was
released in late September 2014. Earlier, I was using the OpenDaylight Hydrogen
release, and recently I tried out the OpenDaylight Helium release. In this blog, I
have shared some of my experiences with Helium.
L2 switch application:
I tried out the L2switch application that comes with Helium. I installed
the following features in Karaf.
I connected Mininet using OF1.3 with a simple topology of 1 switch and 2
hosts, and I was not able to ping between the hosts. After discussing
on the OpenDaylight mailing list, I realized I needed to upgrade my Open vSwitch
version. I had Open vSwitch 1.4.6, which worked with Hydrogen. I
had to upgrade Open vSwitch to version 2.1.3, and the Mininet ping worked
after that.
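For reference, a topology like that can be built with the Mininet Python API roughly as follows. This is a sketch; the controller IP address, port 6633, and the OpenFlow13 protocol setting are assumptions about a local OpenDaylight setup.

# Sketch of the "1 switch, 2 hosts" Mininet topology pointed at a remote
# controller and forced to OpenFlow 1.3. Controller address/port assumed.
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.cli import CLI

net = Mininet(controller=None, switch=OVSSwitch)
net.addController('c0', controller=RemoteController,
                  ip='127.0.0.1', port=6633)
s1 = net.addSwitch('s1', protocols='OpenFlow13')   # needs a recent Open vSwitch
h1 = net.addHost('h1')
h2 = net.addHost('h2')
net.addLink(h1, s1)
net.addLink(h2, s1)

net.start()
net.pingAll()      # should succeed once the L2switch application installs flows
CLI(net)           # drop into the Mininet CLI for further testing
net.stop()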
Hydrogen had a REST API through which we could access the controller data,
both for configuration and monitoring. With Helium, controller data is
maintained as a YANG model in MD-SAL and gets exposed using
RESTCONF. I had earlier written a Python library for OpenDaylight
Hydrogen to access inventory, topology, flows, etc. I had to rewrite the
library to get it working with Helium, since RESTCONF is used and the
grouping of information has changed with Helium. The Helium Python library that
I have written can be accessed from here. To browse through the complete
RESTCONF tree, as well as for configuring, the following
link (http://localhost:8181/apidoc/explorer/index.html) can be used while the
controller is running. Another approach to try out the REST APIs is through the
Postman client, which is available for Chrome browsers.
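As an illustration of the configuration/monitoring split, the operational datastore can be read for monitoring while flows are written to the config datastore. The paths, the node and table identifiers (openflow:1, table 0, flow 1), the flow JSON layout, and the admin/admin credentials below are assumptions about a default Helium setup, not verified values.

# Sketch: reading the operational datastore (monitoring) and writing a flow
# to the config datastore (configuration) over RESTCONF.
import requests

BASE = "http://localhost:8181/restconf"
AUTH = ("admin", "admin")
HDRS = {"Content-Type": "application/json", "Accept": "application/json"}

# Monitoring: list the switches the controller currently knows about.
nodes = requests.get(f"{BASE}/operational/opendaylight-inventory:nodes",
                     auth=AUTH, headers=HDRS).json()
print(nodes)

# Configuration: push a simple drop flow into table 0 of switch openflow:1.
flow = {"flow": [{
    "id": "1",
    "table_id": 0,
    "priority": 10,
    "match": {"ethernet-match": {"ethernet-type": {"type": 2048}}},  # IPv4
    "instructions": {"instruction": [{
        "order": 0,
        "apply-actions": {"action": [{"order": 0, "drop-action": {}}]},
    }]},
}]}
resp = requests.put(
    f"{BASE}/config/opendaylight-inventory:nodes/node/openflow:1/table/0/flow/1",
    json=flow, auth=AUTH, headers=HDRS)
print(resp.status_code)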
These programs communicate their requirements for, and receive the state of, the network by interacting with the SDN
control plane through the Northbound Interface (NBI).
The control plane can therefore modify the behavior of network resources automatically.
The programs in the application plane also access a global abstract view of the network resources
for their internal decision-making; this view is provided by the SDN control plane
using data models exposed via the Northbound Interface.
ETSI has created several standards; the one described below is one of the most
important, and it illustrates how the NFVI helps us decouple the hardware and the
software.
The NFV building blocks are shown in Figure #2. The architecture can be divided into four layers: the Virtual Network Function (VNF) layer, the NFV Infrastructure (NFVI), Operations and Business Support Systems (OSS/BSS), and Management and Orchestration (MANO).
The VNF layer has two subsections: the Virtual Network Function (VNF) and the Element Management System (EMS).
A Virtual Network Function (VNF) is the basic block in the NFV architecture. It is a virtualized network
function: for example, when a router is virtualized we call it a router VNF, and when a base station is
virtualized we call it a base station VNF; similarly, there can be a DHCP server VNF and a firewall VNF.
Even when one sub-function of a network element is virtualized, it is called a VNF. For example, in the
Evolved Packet Core (EPC) case, various sub-functions such as the MME, gateways, and HSS can be
separate VNFs which together function as a virtual EPC.
VNFs are deployed on Virtual Machines (VMs). A VNF can be deployed across multiple VMs, where each
VM hosts a single function of the VNF; however, the whole VNF can also be deployed on a single VM.
Element Management System (EMS) is responsible for the functional management of VNF. The
management functions include Fault, Configuration, Accounting, Performance and Security
Management. An EMS may manage the VNFs through proprietary interfaces. There may be one EMS
per VNF, or one EMS may manage multiple VNFs. The EMS itself can be deployed as a Virtual Network
Function (VNF).
NFV Infrastructure (NFVI) is the totality of hardware and software components which build up the
environment in which VNFs are deployed, managed, and executed. The NFV infrastructure can physically
span several locations; the network providing connectivity between these locations is also regarded as
part of the NFV infrastructure. It consists of:
Hardware Resources
Virtualization Layer
Virtual Resources
From the VNF's point of view, the virtualization layer and the hardware resources appear as a single entity
providing it with the desired resources.
Hardware resources include computing, storage, and network resources that provide processing, storage, and
connectivity to VNFs through the virtualization (hypervisor) layer. Computing and storage resources are
commonly used in a pool. The network resources comprise switching functions, e.g. routers, and wired or
wireless networks.
The virtualization layer, also known as the hypervisor, abstracts the hardware resources and decouples
the VNF software from the underlying hardware to ensure a hardware-independent life cycle for VNFs.
It is mainly responsible for ensuring that VNFs are decoupled from the hardware resources, so that the
software can be deployed on different physical resources.
Virtual Resources
The virtualization layer abstracts the computing, storage, and network resources from the hardware layer
and makes them available as virtual resources.
OSS/BSS refers to the OSS/BSS of an operator. OSS deals with network management, fault management,
configuration management, and service management. BSS deals with customer management, product
management, order management, etc.
In the NFV architecture, the decoupled OSS/BSS of an operator may be integrated with NFV
Management and Orchestration using standard interfaces.
The Management and Orchestration layer is also abbreviated as MANO, and it includes three components:
the Virtualised Infrastructure Manager (VIM), the VNF Manager, and the Orchestrator.
MANO interacts with both the NFVI and the VNF layer. The MANO layer manages all the resources in the
infrastructure layer; it also creates and deletes resources and manages their allocation to the VNFs.
The Virtualised Infrastructure Manager (VIM) comprises the functionalities that are used to control and
manage the interaction of a VNF with the computing, storage, and network resources under its authority,
as well as their virtualisation.
The VNF Manager is responsible for VNF life cycle management, which includes installation, updates,
query, scale up/down, and termination. A VNF manager may be deployed for each VNF, or a single VNF
manager may be deployed to serve multiple VNFs.
The Orchestrator is in charge of the orchestration and management of the NFV infrastructure and software
resources, and of realizing network services.
Apart from the above building blocks, there is one more independent block, known as the Service, VNF and
Infrastructure Description. It includes data sets that provide information regarding VNF deployment
templates, VNF forwarding graphs, service-related information, and NFV infrastructure information models.
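Purely as an illustration of what such a data set might carry, a VNF deployment template could look like the simplified, hypothetical structure below; it is not the ETSI VNFD schema, just a sketch of the kind of information a descriptor holds.

# Hypothetical, simplified VNF deployment template, for illustration only;
# real ETSI descriptors (VNFD/NSD) are far richer and more formally defined.
vnf_descriptor = {
    "vnf_name": "firewall-vnf",
    "vdus": [  # virtual deployment units, i.e. the VMs making up the VNF
        {"name": "fw-vm-1", "vcpus": 2, "memory_mb": 4096, "storage_gb": 20},
    ],
    "connection_points": ["mgmt", "inside", "outside"],
}

print(vnf_descriptor["vnf_name"], "uses", len(vnf_descriptor["vdus"]), "VM(s)")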
Virtual Network Functions (VNFs) are virtualized network services running on open
computing platforms formerly carried out by proprietary, dedicated hardware
technology.
Common VNFs include virtualized routers, firewalls, WAN optimization, and network
address translation (NAT) services. Most VNFs run in virtual machines (VMs) on
common virtualization infrastructure software such as VMware or KVM.
VNFs can be linked together like building blocks in a process known as service
chaining. Although the concept is not new, service chaining—and the application
provisioning process—is simplified and shortened using VNF technology.
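To make the idea concrete, here is a toy sketch (illustrative only, not any orchestrator's API) that represents a service chain as an ordered list of VNF processing steps applied to a packet; real service chaining steers traffic between VNF instances at the network layer rather than calling functions.

# Toy service chain: each "VNF" is just a function applied in order.

def firewall(packet):
    if packet.get("port") == 23:          # drop telnet as an example rule
        return None
    return packet

def nat(packet):
    packet["src"] = "203.0.113.10"        # rewrite the source address
    return packet

def wan_optimizer(packet):
    packet["compressed"] = True
    return packet

service_chain = [firewall, nat, wan_optimizer]

def run_chain(packet):
    for vnf in service_chain:
        packet = vnf(packet)
        if packet is None:                # dropped by an earlier VNF
            return None
    return packet

print(run_chain({"src": "10.0.0.5", "dst": "198.51.100.7", "port": 443}))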
VNFs can help increase network scalability and agility, while also enabling better use of
network infrastructure resources. Other benefits include reducing power consumption
and increasing security and available physical space, since VNFs replace physical
hardware.