
Hindawi Publishing Corporation

Journal of Electrical and Computer Engineering


Volume 2016, Article ID 5249421, 15 pages
http://dx.doi.org/10.1155/2016/5249421

Research Article
Virtual Networking Performance in OpenStack Platform for
Network Function Virtualization

Franco Callegati, Walter Cerroni, and Chiara Contoli


DEI, University of Bologna, Via Venezia 52, 47521 Cesena, Italy

Correspondence should be addressed to Walter Cerroni; walter.cerroni@unibo.it

Received 19 October 2015; Revised 19 January 2016; Accepted 30 March 2016

Academic Editor: Yan Luo

Copyright © 2016 Franco Callegati et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of
network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that
will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data
centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to
take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose
a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights
on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it
can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented, covering a number of scenarios inspired by the cloud computing and NFV paradigms and considering both single tenant and multitenant cases. The results of the evaluation highlight the potential and the limitations of running NFV
on OpenStack.

1. Introduction

Despite the original vision of the Internet as a set of networks interconnected by distributed layer 3 routing nodes, nowadays IP datagrams are not simply forwarded to their final destination based on IP header and next-hop information. A number of so-called middle-boxes process IP traffic, performing cross-layer tasks such as address translation, packet inspection and filtering, QoS management, and load balancing. They represent a significant fraction of network operators' capital and operational expenses. Moreover, they are closed systems, and the deployment of new communication services is strongly dependent on the product capabilities, causing the so-called "vendor lock-in" and Internet "ossification" phenomena [1]. A possible solution to this problem is the adoption of virtualized middle-boxes based on open software and hardware solutions. Network virtualization brings great advantages in terms of flexible network management, performed at the software level, and possible coexistence of multiple customers sharing the same physical infrastructure (i.e., multitenancy). Network virtualization solutions are already widely deployed at different protocol layers, including Virtual Local Area Networks (VLANs), multilayer Virtual Private Network (VPN) tunnels over public wide-area interconnections, and Overlay Networks [2].

Today the combination of emerging technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) promises to bring innovation one step further. SDN provides a more flexible and programmatic control of network devices and fosters new forms of virtualization that will definitely change the shape of future network architectures [3], while NFV defines standards to deploy software-based building blocks implementing highly flexible network service chains capable of adapting to the rapidly changing user requirements [4].

As a consequence, it is possible to imagine a medium-term evolution of the network architectures where middle-boxes will turn into virtual machines (VMs) implementing network functions within cloud computing infrastructures, and telco central offices will be replaced by data centers located at the edge of the network [5–7]. Network operators will take advantage of the increased flexibility and reduced
deployment costs typical of the cloud-based approach, paving the way to the upcoming software-centric evolution of telecommunications [8]. However, a number of challenges must be dealt with, in terms of system integration, data center management, and packet processing performance. For instance, if VLANs are used in the physical switches and in the virtual LANs within the cloud infrastructure, a suitable integration is necessary, and the coexistence of different IP virtual networks dedicated to multiple tenants must be seamlessly guaranteed with proper isolation.

Then a few questions are naturally raised: Will cloud computing platforms be actually capable of satisfying the requirements of complex communication environments such as the operators' edge networks? Will data centers be able to effectively replace the existing telco infrastructures at the edge? Will virtualized networks provide performance comparable to that achieved with current physical networks, or will they pose significant limitations? Indeed the answer to this question will be a function of the cloud management platform considered. In this work the focus is on OpenStack, which is among the state-of-the-art Linux-based virtualization and cloud management tools. Developed by the open-source software community, OpenStack implements the Infrastructure-as-a-Service (IaaS) paradigm in a multitenant context [9].

To the best of our knowledge, not much work has been reported about the actual performance limits of network virtualization in OpenStack cloud infrastructures under the NFV scenario. Some authors assessed the performance of Linux-based virtual switching [10, 11], while others investigated network performance in public cloud services [12]. Solutions for low-latency SDN implementation on high-performance cloud platforms have also been developed [13]. However, none of the above works specifically deals with NFV scenarios on the OpenStack platform. Although some mechanisms for effectively placing virtual network functions within an OpenStack cloud have been presented [14], a detailed analysis of their network performance has not been provided yet.

This paper aims at providing insights on how the OpenStack platform implements multitenant network virtualization, focusing in particular on the performance issues, trying to fill a gap that is starting to get the attention also from the OpenStack developer community [15]. The paper objective is to identify performance bottlenecks in the cloud implementation of the NFV paradigms. An ad hoc set of experiments was designed to evaluate the OpenStack performance under critical load conditions, in both single tenant and multitenant scenarios. The results reported in this work extend the preliminary assessment published in [16, 17].

The paper is structured as follows: the network virtualization concept in cloud computing infrastructures is further elaborated in Section 2; the OpenStack virtual network architecture is illustrated in Section 3; the experimental test-bed that we have deployed to assess its performance is presented in Section 4; the results obtained under different scenarios are discussed in Section 5; some conclusions are finally drawn in Section 6.

2. Cloud Network Virtualization

Generally speaking, network virtualization is not a new concept. Virtual LANs, Virtual Private Networks, and Overlay Networks are examples of virtualization techniques already widely used in networking, mostly to achieve isolation of traffic flows and/or of whole network sections, either for security or for functional purposes such as traffic engineering and performance optimization [2].

Upon considering cloud computing infrastructures, the concept of network virtualization evolves even further. It is not just that some functionalities can be configured in physical devices to obtain some additional functionality in virtual form. In cloud infrastructures whole parts of the network are virtual, implemented with software devices and/or functions running within the servers. This new "softwarized" network implementation scenario allows novel network control and management paradigms. In particular, the synergies between NFV and SDN offer programmatic capabilities that allow easily defining and flexibly managing multiple virtual network slices at levels not achievable before [1].

In cloud networking the typical scenario is a set of VMs dedicated to a given tenant, able to communicate with each other as if connected to the same Local Area Network (LAN), independently of the physical server or servers they are running on. The VMs and LANs of different tenants have to be isolated and should communicate with the outside world only through layer 3 routing and filtering devices. From such requirements stem two major issues to be addressed in cloud networking: (i) integration of any set of virtual networks defined in the data center physical switches with the specific virtual network technologies adopted by the hosting servers and (ii) isolation among virtual networks that must be logically separated because of being dedicated to different purposes or different customers. Moreover, these problems should be solved with performance optimization in mind, for instance, aiming at keeping VMs with intensive exchange of data colocated in the same server, keeping local traffic inside the host and thus reducing the need for external network resources and minimizing the communication latency.

The solution to these issues is usually fully supported by the VM manager (i.e., the Hypervisor) running on the hosting servers. Layer 3 routing functions can be executed by taking advantage of lightweight virtualization tools, such as Linux containers or network namespaces, resulting in isolated virtual networks with dedicated network stacks (e.g., IP routing tables and netfilter flow states) [18]. Similarly, layer 2 switching is typically implemented by means of kernel-level virtual bridges/switches interconnecting a VM's virtual interface to a host's physical interface. Moreover, the VM placement algorithms may be designed to take networking issues into account, thus optimizing the networking in the cloud together with computation effectiveness [19]. Finally, it is worth mentioning that whatever network virtualization technology is adopted within a data center, it should be compatible with SDN-based implementation of the control plane (e.g., OpenFlow) for improved manageability and programmability [20].
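The combination of network namespaces for layer 3 isolation and kernel-level bridges for layer 2 switching described above can be reproduced on any Linux host. The following is a minimal sketch of the idea, not the configuration generated by OpenStack itself: it creates a per-tenant namespace acting as an isolated routing element, a Linux Bridge acting as the tenant LAN, and a veth pair connecting the two. All names and addresses are ours, chosen only for illustration.

```python
import subprocess

def sh(cmd):
    # Run a shell command and fail loudly if it does not succeed.
    subprocess.run(cmd, shell=True, check=True)

# Hypothetical names/addresses for one tenant slice.
NS = "tenant1-router"       # network namespace emulating the tenant's L3 node
BRIDGE = "tenant1-lan"      # Linux Bridge emulating the tenant's L2 segment
GATEWAY = "10.10.1.1/24"    # gateway address inside the namespace

# 1. Create an isolated network stack (own routing table, own netfilter state).
sh(f"ip netns add {NS}")

# 2. Create the tenant LAN as a kernel-level Linux Bridge.
sh(f"ip link add name {BRIDGE} type bridge")
sh(f"ip link set {BRIDGE} up")

# 3. Connect the namespace to the bridge with a veth pair
#    (one endpoint stays in the host, the other moves into the namespace).
sh("ip link add veth-host type veth peer name veth-ns")
sh(f"ip link set veth-host master {BRIDGE} up")
sh(f"ip link set veth-ns netns {NS}")
sh(f"ip netns exec {NS} ip addr add {GATEWAY} dev veth-ns")
sh(f"ip netns exec {NS} ip link set veth-ns up")

# VM tap interfaces attached to the same bridge would now share the tenant
# LAN and reach the outside world only through the namespace's routing.
```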
Figure 1: Main components of an OpenStack cloud setup.

For the purposes of this work the implementation of layer 2 connectivity in the cloud environment is of particular relevance. Many Hypervisors running on Linux systems implement the LANs inside the servers using Linux Bridge, the native kernel bridging module [21]. This solution is straightforward and is natively integrated with the powerful Linux packet filtering and traffic conditioning kernel functions. The overall performance of this solution should be at a reasonable level when the system is not overloaded [22]. The Linux Bridge basically works as a transparent bridge with MAC learning, providing the same functionality as a standard Ethernet switch in terms of packet forwarding. But such standard behavior is not compatible with SDN and is not flexible enough when aspects such as multitenant traffic isolation, transparent VM mobility, and fine-grained forwarding programmability are critical. The Linux-based bridging alternative is Open vSwitch (OVS), a software switching facility specifically designed for virtualized environments and capable of reaching kernel-level performance [23]. OVS is also OpenFlow-enabled and therefore fully compatible and integrated with SDN solutions.

3. OpenStack Virtual Network Infrastructure

OpenStack provides cloud managers with a web-based dashboard as well as a powerful and flexible Application Programmable Interface (API) to control a set of physical hosting servers executing different kinds of Hypervisors (in general, OpenStack is designed to manage a number of computers, hosting application servers: these application servers can be executed by fully fledged VMs, lightweight containers, or bare-metal hosts; in this work we focus on the most challenging case of application servers running on VMs) and to manage the required storage facilities and virtual network infrastructures. The OpenStack dashboard also allows instantiating computing and networking resources within the data center infrastructure with a high level of transparency. As illustrated in Figure 1, a typical OpenStack cloud is composed of a number of physical nodes and networks:

(i) Controller node: managing the cloud platform.
(ii) Network node: hosting the networking services for the various tenants of the cloud and providing external connectivity.
(iii) Compute nodes: as many hosts as needed in the cluster to execute the VMs.
(iv) Storage nodes: to store data and VM images.
(v) Management network: the physical networking infrastructure used by the controller node to manage the OpenStack cloud services running on the other nodes.
(vi) Instance/tunnel network (or data network): the physical network infrastructure connecting the network node and the compute nodes, to deploy virtual tenant networks and allow inter-VM traffic exchange and VM connectivity to the cloud networking services running in the network node.
(vii) External network: the physical infrastructure enabling connectivity outside the data center.

OpenStack has a component specifically dedicated to network service management: this component, formerly known as Quantum, was renamed as Neutron in the Havana release. Neutron decouples the network abstractions from the actual implementation and provides administrators and users with a flexible interface for virtual network management. The Neutron server is centralized and typically runs in the controller node. It stores all network-related information and implements the virtual network infrastructure in a distributed and coordinated way. This allows Neutron to transparently manage multitenant networks across multiple compute nodes and to provide transparent VM mobility within the data center.

Neutron's main network abstractions are

(i) network, a virtual layer 2 segment;
(ii) subnet, a layer 3 IP address space used in a network;
(iii) port, an attachment point to a network and to one or more subnets on that network;
(iv) router, a virtual appliance that performs routing between subnets and address translation;
(v) DHCP server, a virtual appliance in charge of IP address distribution;
(vi) security group, a set of filtering rules implementing a cloud-level firewall.

A cloud customer wishing to implement a virtual infrastructure in the cloud is considered an OpenStack tenant and can use the OpenStack dashboard to instantiate computing and networking resources, typically creating a new network and the necessary subnets, optionally spawning the related DHCP servers, then starting as many VM instances as required based on a given set of available images, and specifying the subnet (or subnets) to which the VM is connected. Neutron takes care of creating a port on each specified subnet (and its underlying network) and of connecting the VM to that port, while the DHCP service on that network (resident in the network node) assigns a fixed IP address to it. Other virtual appliances (e.g., routers providing global connectivity) can be implemented directly in the cloud platform, by means of containers and network namespaces typically defined in the network node. The different tenant networks are isolated by means of VLANs and network namespaces, whereas the security groups protect the VMs from external attacks or unauthorized access. When some VM instances offer services that must be reachable by external users, the cloud provider defines a pool of floating IP addresses on the external network and configures the network node with VM-specific forwarding rules based on those floating addresses.

OpenStack implements the virtual network infrastructure (VNI) exploiting multiple virtual bridges connecting virtual and/or physical interfaces that may reside in different network namespaces. To better understand such a complex system, a graphical tool was developed to display all the network elements used by OpenStack [24]. Two examples, showing the internal state of a network node connected to three virtual subnets and a compute node running two VMs, are displayed in Figures 2 and 3, respectively.

Each node runs an OVS-based integration bridge named br-int and, connected to it, an additional OVS bridge for each data center physical network attached to the node. So the network node (Figure 2) includes br-tun for the instance/tunnel network and br-ex for the external network. A compute node (Figure 3) includes br-tun only.

Layer 2 virtualization and multitenant isolation on the physical network can be implemented using either VLANs or layer 2-in-layer 3/4 tunneling solutions, such as Virtual eXtensible LAN (VXLAN) or Generic Routing Encapsulation (GRE), which allow extending the local virtual networks also to remote data centers [25]. The examples shown in Figures 2 and 3 refer to the case of tenant isolation implemented with GRE tunnels on the instance/tunnel network. Whatever virtualization technology is used in the physical network, its virtual networks must be mapped into the VLANs used internally by Neutron to achieve isolation. This is performed by taking advantage of the programmable features available in OVS, through the insertion of appropriate OpenFlow mapping rules in br-int and br-tun.

Virtual bridges are interconnected by means of either virtual Ethernet (veth) pairs or patch port pairs, consisting of two virtual interfaces that act as the endpoints of a pipe: anything entering one endpoint always comes out on the other side.

From the networking point of view the creation of a new VM instance involves the following steps:

(i) The OpenStack scheduler component running in the controller node chooses the compute node that will host the VM.
(ii) A tap interface is created for each VM network interface to connect it to the Linux kernel.
(iii) A Linux Bridge dedicated to each VM network interface is created (in Figure 3 two of them are shown) and the corresponding tap interface is attached to it.
(iv) A veth pair connecting the new Linux Bridge to the integration bridge is created.

The veth pair clearly emulates the Ethernet cable that would connect the two bridges in real life. Nonetheless, why the new Linux Bridge is needed is not intuitive, as the VM's tap interface could be directly attached to br-int. In short, the reason is that the antispoofing rules currently implemented by Neutron adopt the native Linux kernel filtering functions (netfilter) applied to bridged tap interfaces, which work only under Linux Bridges. Therefore, the Linux Bridge is required as an intermediate element to interconnect the VM to the integration bridge. The security rules are applied to the Linux Bridge on the tap interface that connects the kernel-level bridge to the virtual Ethernet port of the VM running in user space.
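The plumbing created by these four steps can also be reproduced manually, which is a useful way to understand (and, as shown later, bypass) the individual components. The sketch below mimics the compute-node chain for one VM port under stated assumptions: the interface names follow the qbr/qvb/qvo convention visible in Figure 3 but are invented here, and the tap interface is created by hand instead of by the Hypervisor.

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

PORT_ID = "0a1b2c3d-4e"          # hypothetical truncated Neutron port ID
TAP = f"tap{PORT_ID}"            # VM-facing tap (normally created by KVM/QEMU)
QBR = f"qbr{PORT_ID}"            # per-port Linux Bridge (for the netfilter rules)
QVB = f"qvb{PORT_ID}"            # veth endpoint on the Linux Bridge side
QVO = f"qvo{PORT_ID}"            # veth endpoint on the OVS side

# (ii) tap interface: here created as a plain tap device for illustration.
sh(f"ip tuntap add dev {TAP} mode tap")

# (iii) dedicated Linux Bridge with the tap attached to it.
sh(f"ip link add name {QBR} type bridge")
sh(f"ip link set {TAP} master {QBR} up")
sh(f"ip link set {QBR} up")

# (iv) veth pair connecting the Linux Bridge to the OVS integration bridge.
sh(f"ip link add {QVB} type veth peer name {QVO}")
sh(f"ip link set {QVB} master {QBR} up")
sh(f"ovs-vsctl add-port br-int {QVO}")
sh(f"ip link set {QVO} up")
```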
4. Experimental Setup

The previous section makes the complexity of the OpenStack virtual network infrastructure clear. To understand optimal design strategies in terms of network performance it is of great importance to analyze it under critical traffic conditions and assess the maximum sustainable packet rate under different application scenarios. The goal is to isolate as much as possible the level of performance of the main OpenStack network components and determine where the bottlenecks are located, speculating on possible improvements. To this purpose, a test-bed including a controller node, one or two compute nodes (depending on the specific experiment), and a network node was deployed and used to obtain the results presented in the following. In the test-bed each compute node runs KVM, the native Linux VM Hypervisor, and is equipped with 8 GB of RAM and a quad-core processor with hyperthreading enabled, resulting in 8 virtual CPUs.

Figure 2: Network elements in an OpenStack network node connected to three virtual subnets. Three OVS bridges are interconnected by patch port pairs. br-ex is directly attached to the external network physical interface (eth0), whereas a GRE tunnel is established on the instance/tunnel network physical interface (eth1) to connect br-tun with its counterpart in the compute node. A number of br-int ports are connected to four virtual router interfaces and three DHCP servers. An additional physical interface (eth2) connects the network node to the management network.

The test-bed was configured to implement three possible use cases:

(1) A typical single tenant cloud computing scenario.
(2) A multitenant NFV scenario with dedicated network functions.
(3) A multitenant NFV scenario with shared network functions.

For each use case multiple experiments were executed, as reported in the following. In the various experiments a traffic source typically sends packets at an increasing rate to a destination that measures the received packet rate and throughput. To this purpose the RUDE & CRUDE tool was used, for both traffic generation and measurement [26]. In some cases, the Iperf3 tool was also added to generate background traffic at a fixed data rate [27]. All physical interfaces involved in the experiments were Gigabit Ethernet network cards.
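As an illustration of the measurement methodology, the snippet below drives one sender/receiver pair with iperf3 in UDP mode at a fixed offered load. This is only a sketch of how background traffic with a given rate and packet size can be generated, not the actual RUDE & CRUDE scripts used in the experiments; the receiver address and the numeric values are placeholders.

```python
import subprocess

RECEIVER = "192.168.100.2"   # placeholder address of the traffic sink
PAYLOAD = 1472               # UDP payload giving a 1500-byte IP packet (20 B IP + 8 B UDP)
RATE_MBPS = 100              # fixed offered load for the background flow
DURATION = 60                # seconds

# On the receiver side: run "iperf3 -s" so that it reports the received rate.

# On the sender side: UDP flow at a constant bit rate with the chosen datagram size.
sender = [
    "iperf3", "-c", RECEIVER,
    "-u",                        # UDP traffic
    "-b", f"{RATE_MBPS}M",       # target bit rate
    "-l", str(PAYLOAD),          # datagram length
    "-t", str(DURATION),         # test duration
]
result = subprocess.run(sender, capture_output=True, text=True, check=True)
print(result.stdout)             # per-interval and summary statistics
```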
Figure 3: Network elements in an OpenStack compute node running two VMs. Two Linux Bridges are attached to the VM tap interfaces and connected by virtual Ethernet pairs to br-int.

4.1. Single Tenant Cloud Computing Scenario. This is the typical configuration where a single tenant runs one or multiple VMs that exchange traffic with one another in the cloud or with an external host, as shown in Figure 4. This is a rather trivial case of limited general interest, but it is useful to assess some basic concepts and pave the way to the deeper analysis developed in the second part of this section. In the experiments reported, as mentioned above, the virtualization Hypervisor was always KVM. A scenario with OpenStack running the cloud environment and a scenario without OpenStack were considered, to allow a general comparison and a first isolation of the performance degradation due to the individual building blocks, in particular Linux Bridge and OVS.

The experiments reported cover the following cases:

(1) OpenStack scenario: it adopts the standard OpenStack cloud platform, as described in the previous section, with two VMs acting as sender and receiver, respectively. In particular, the following setups were tested:
(1.1) A single compute node executing two colocated VMs.
(1.2) Two distinct compute nodes, each executing a VM.

(2) Non-OpenStack scenario: it adopts physical hosts running Linux Ubuntu server and the KVM Hypervisor, using either OVS or Linux Bridge as a virtual switch. The following setups were tested:
(2.1) One physical host executing two colocated VMs, acting as sender and receiver and directly connected to the same Linux Bridge.
(2.2) The same setup as the previous one, but with an OVS bridge instead of a Linux Bridge.
(2.3) Two physical hosts: one executing the sender VM connected to an internal OVS and the other natively acting as the receiver.
Figure 4: Reference logical architecture of a single tenant virtual infrastructure with 5 hosts: 4 hosts are implemented as VMs in the cloud and are interconnected via the OpenStack layer 2 virtual infrastructure; the 5th host is implemented by a physical machine placed outside the cloud but still connected to the same logical LAN.

Figure 5: Multitenant NFV scenario with dedicated network functions tested on the OpenStack platform.

4.2. Multitenant NFV Scenario with Dedicated Network Functions. The multitenant scenario we want to analyze is inspired by a simple NFV case study, as illustrated in Figure 5: each tenant's service chain consists of a customer-controlled VM followed by a dedicated deep packet inspection (DPI) virtual appliance and a conventional gateway (router) connecting the customer LAN to the public Internet. The DPI is deployed by the service operator as a separate VM with two network interfaces, running a traffic monitoring application based on the nDPI library [28]. It is assumed that the DPI analyzes the traffic profile of the customers (source and destination IP addresses and ports, application protocol, etc.) to guarantee the matching with the customer service level agreement (SLA), a practice that is rather common among Internet service providers to enforce network security and traffic policing. The virtualization approach executing the DPI in a VM makes it possible to easily configure and adapt the inspection function to the specific tenant characteristics. For this reason every tenant has its own DPI with a dedicated configuration. On the other hand, the gateway has to implement a standard functionality and is shared among customers. It is implemented as a virtual router for packet forwarding and NAT operations.

The implementation of the test scenarios has been done following the OpenStack architecture. The compute nodes of the cluster run the VMs, while the network node runs the virtual router within a dedicated network namespace. All layer 2 connections are implemented by a virtual switch (with proper VLAN isolation) distributed in both the compute and network nodes. Figure 6 shows the view provided by the OpenStack dashboard, in the case of 4 tenants simultaneously active, which is the one considered for the numerical results presented in the following. The choice of 4 tenants was made to provide meaningful results with an acceptable degree of complexity, without lack of generality. As results show, this is enough to put the hardware resources of the compute node under stress and therefore evaluate performance limits and critical issues.

It is very important to outline that the VM setup shown in Figure 5 is not commonly seen in a traditional cloud computing environment. The VMs usually behave as single hosts connected as endpoints to one or more virtual networks, with one single network interface and no pass-through forwarding duties. In NFV the virtual network functions (VNFs) often perform actions that require packet forwarding. Network Address Translators (NATs), Deep Packet Inspectors (DPIs), and so forth all belong to this category. If such VNFs are hosted in VMs, the result is that VMs in the OpenStack infrastructure must be allowed to perform packet forwarding, which goes against the typical rules implemented for security reasons in OpenStack. For instance, when a new VM is instantiated it is attached to a Linux Bridge to which filtering rules are applied with the goal of avoiding that the VM sends packets with MAC and IP addresses that are not the ones allocated to the VM itself. Clearly this is an antispoofing rule that makes perfect sense in a normal networking environment but impairs the forwarding of packets originated by another VM, as is the case of the NFV scenario. In the scenario considered here, it was therefore necessary to permanently modify the filtering rules in the Linux Bridges, by allowing, within each tenant slice, packets coming from or directed to the customer VM's IP address to pass through the Linux Bridges attached to the DPI virtual appliance. Similarly, the virtual router is usually connected to just one LAN, and therefore its NAT function is configured for a single pool of addresses. This was also modified and adapted to serve the whole set of internal networks used in the multitenant setup.
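What relaxing the antispoofing rules means in practice can be sketched as follows. This is only an illustration under stated assumptions: Neutron installs its own iptables chains with release-specific names, so here a generic pair of rules is added to the FORWARD chain using the physdev match on the DPI's bridged tap interface; the interface name and the customer subnet are hypothetical.

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

# Hypothetical identifiers for one tenant slice.
DPI_TAP = "tap77f8a413-49"        # tap of the DPI's inward-facing interface
CUSTOMER_SUBNET = "10.1.4.0/24"   # addresses of the customer VM's internal network

# Allow packets sourced from (or destined to) the customer VM to be forwarded
# through the Linux Bridge attached to the DPI appliance, instead of being
# dropped as spoofed traffic.
sh(f"iptables -I FORWARD -m physdev --physdev-in {DPI_TAP} "
   f"-s {CUSTOMER_SUBNET} -j ACCEPT")
sh(f"iptables -I FORWARD -m physdev --physdev-out {DPI_TAP} "
   f"-d {CUSTOMER_SUBNET} -j ACCEPT")
```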
4.3. Multitenant NFV Scenario with Shared Network Functions. We finally extend our analysis to a set of multitenant scenarios assuming different levels of shared VNFs, as illustrated in Figure 7. We start with a single VNF, that is, the virtual router connecting all tenants to the external network (Figure 7(a)). Then we progressively add a shared DPI (Figure 7(b)), a shared firewall/NAT function (Figure 7(c)), and a shared traffic shaper (Figure 7(d)). The rationale behind this last group of setups is to evaluate how NFV deployment on top of an OpenStack compute node performs under a realistic multitenant scenario where traffic flows must be processed by a chain of multiple VNFs. The complexity of the virtual network path inside the compute node for the VNF chaining of Figure 7(d) is displayed in Figure 8. The peculiar nature of NFV traffic flows is clearly shown in the figure, where packets are being forwarded multiple times across br-int as they enter and exit the multiple VNFs running in the compute node.

Figure 6: The OpenStack dashboard shows the tenants' virtual networks (slices). Each slice includes a VM connected to an internal network (InVMnet_i) and a second VM performing DPI and packet forwarding between InVMnet_i and DPInet_i. Connectivity with the public Internet is provided for all by the virtual router in the bottom-left corner.

5. Numerical Results

5.1. Benchmark Performance. Before presenting and discussing the performance of the study scenarios described above, it is important to set some benchmark as a reference for comparison. This was done by considering a back-to-back (B2B) connection between two physical hosts, with the same hardware configuration used in the cluster of the cloud platform. The former host acts as traffic generator while the latter acts as traffic sink. The aim is to verify and assess the maximum throughput and sustainable packet rate of the hardware platform used for the experiments. Packet flows ranging from 10^3 to 10^5 packets per second (pps), for both 64- and 1500-byte IP packet sizes, were generated.
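A quick back-of-the-envelope check relates these packet rates to the bit rates reported in the results; the conversion below is ours, while the measured values are those given in the text.

```python
def ip_throughput_mbps(packet_rate_pps, ip_packet_bytes):
    """Offered load carried in IP packets, excluding Ethernet overhead."""
    return packet_rate_pps * ip_packet_bytes * 8 / 1e6

# 80 Kpps of 1500-byte packets already amounts to ~960 Mbps of IP-level load,
# of the same order as the ~970 Mbps saturation measured in the B2B test and
# close to the 1 Gbps physical limit of the Gigabit Ethernet interface.
print(ip_throughput_mbps(80_000, 1500))   # 960.0 Mbps

# At the same 80 Kpps, 64-byte packets carry only ~41 Mbps, so for small
# packets the per-packet processing cost, not the link capacity, is what
# limits the achievable throughput.
print(ip_throughput_mbps(80_000, 64))     # 40.96 Mbps
```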
For 1500-byte packets, the throughput saturates at about 970 Mbps at 80 Kpps. Given that the measurement does not consider the Ethernet overhead, this limit is clearly very close to 1 Gbps, which is the physical limit of the Ethernet interface. For 64-byte packets, the results are different, since the maximum measured throughput is about 150 Mbps. Therefore the limiting factor is not the Ethernet bandwidth but the maximum sustainable packet processing rate of the compute node. These results are shown in Figure 9.

Figure 7: Multitenant NFV scenario with shared network functions tested on the OpenStack platform. (a) Single VNF; (b) two VNFs' chaining; (c) three VNFs' chaining; (d) four VNFs' chaining.

This latter limitation, related to the processing capabilities of the hosts, is not very relevant to the scope of this work. Indeed it is always possible, in a real operational environment, to deploy more powerful and better dimensioned hardware. This was not possible in this set of experiments, where the cloud cluster was an existing research infrastructure which could not be modified at will. Nonetheless, the objective here is to understand the limitations that emerge as a consequence of the networking architecture resulting from the deployment of the VNFs in the cloud, and not of the specific hardware configuration. For these reasons, as well as for the sake of brevity, the numerical results presented in the following mostly focus on the case of 1500-byte packet length, which stresses the network more than the hosts in terms of performance.

5.2. Single Tenant Cloud Computing Scenario. The first series of results is related to the single tenant scenario described in Section 4.1. Figure 10 shows the comparison of OpenStack setups (1.1) and (1.2) with the B2B case. The figure shows that the different networking configurations play a crucial role in performance. Setup (1.1), with the two VMs colocated in the same compute node, is clearly more demanding, since the compute node has to process the workload of all the components shown in Figure 3, that is, packet generation and reception in two VMs and layer 2 switching in two Linux Bridges and two OVS bridges (as a matter of fact, the packets are both outgoing and incoming at the same time within the same physical machine). The performance starts deviating from the B2B case at around 20 Kpps, with a saturating effect starting at 30 Kpps. This is the maximum packet processing capability of the compute node, regardless of the physical networking capacity, which is not fully exploited in this particular scenario where the traffic flow does not leave the physical host. Setup (1.2) splits the workload over two physical machines and the benefit is evident. The performance is almost ideal, with a very small penalty due to the virtualization overhead.

These very simple experiments lead to an important conclusion that motivates the more complex experiments that follow: the standard OpenStack virtual network implementation can show significant performance limitations. For this reason the first objective was to investigate where the possible bottleneck is, by evaluating the performance of the virtual network components in isolation. This cannot be done with OpenStack in action; therefore ad hoc virtual networking scenarios were implemented deploying just parts of the typical OpenStack infrastructure. These are called Non-OpenStack scenarios in the following.
Figure 8: A view of the OpenStack compute node with the tenant VM and the VNFs installed, including the building blocks of the virtual network infrastructure. The red dashed line shows the path followed by the packets traversing the VNF chain displayed in Figure 7(d).

Figure 9: Throughput versus generated packet rate in the B2B setup for 64- and 1500-byte packets. Comparison with ideal 1500-byte packet throughput.

Figure 10: Received versus generated packet rate in the OpenStack scenario setups (1.1) and (1.2), with 1500-byte packets.

Setups (2.1) and (2.2) compare Linux Bridge, OVS, and B2B, as shown in Figure 11. The graphs show interesting and important results that can be summarized as follows:

(i) The introduction of some virtual network component (thus introducing the processing load of the physical
hosts in the equation) is always a cause of performance degradation, but with very different degrees of magnitude depending on the virtual network component.

(ii) OVS introduces a rather limited performance degradation at very high packet rate, with a loss of some percent.

(iii) Linux Bridge introduces a significant performance degradation starting well before the OVS case and leading to a loss in throughput as high as 50%.

The conclusion of these experiments is that the presence of additional Linux Bridges in the compute nodes is one of the main reasons for the OpenStack performance degradation. Results obtained from testing setup (2.3) are displayed in Figure 12, confirming that with OVS it is possible to reach performance comparable with the baseline.

Figure 11: Received versus generated packet rate in the Non-OpenStack scenario setups (2.1) and (2.2), with 1500-byte packets.

Figure 12: Received versus generated packet rate in the Non-OpenStack scenario setup (2.3), with 1500-byte packets.

Figure 13: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 1500-byte IP packet size.

5.3. Multitenant NFV Scenario with Dedicated Network Functions. The second series of experiments was performed with reference to the multitenant NFV scenario with dedicated network functions described in Section 4.2. The case study considers that different numbers of tenants are hosted in the same compute node, sending data to a destination outside the LAN, therefore beyond the virtual gateway. Figure 13 shows the packet rate actually received at the destination for each tenant, for different numbers of simultaneously active tenants, with 1500-byte IP packet size. In all cases the tenants generate the same amount of traffic, resulting in as many overlapping curves as the number of active tenants. All curves grow linearly as long as the generated traffic is sustainable, and then they saturate. The saturation is caused by the physical bandwidth limit imposed by the Gigabit Ethernet interfaces involved in the data transfer. In fact, the curves become flat as soon as the packet rate reaches about 80 Kpps for 1 tenant, about 40 Kpps for 2 tenants, about 27 Kpps for 3 tenants, and about 20 Kpps for 4 tenants, that is, when the total packet rate is slightly more than 80 Kpps, corresponding to 1 Gbps.

In this case it is worth investigating what happens for small packets, therefore putting more pressure on the processing capabilities of the compute node. Figure 14 reports the 64-byte packet size case. As discussed previously, in this case the performance saturation is not caused by the physical bandwidth limit, but by the inability of the hardware platform to cope with the packet processing workload (in fact, the single compute node has to process the workload of all the components involved, including packet generation and DPI in the VMs of each tenant, as well as layer 2 packet
processing and switching in three Linux Bridges per tenant and two OVS bridges). As could be easily expected from the results presented in Figure 9, the virtual network is not able to use the whole physical capacity. Even in the case of just one tenant, a total bit rate of about 77 Mbps, well below 1 Gbps, is measured. Moreover, this penalty increases with the number of tenants (i.e., with the complexity of the virtual system). With two tenants the curve saturates at a total of approximately 150 Kpps (75 × 2), with three tenants at a total of approximately 135 Kpps (45 × 3), and with four tenants at a total of approximately 120 Kpps (30 × 4). This is to say that an increase of one unit in the number of tenants results in a decrease of about 10% in the usable overall network capacity and in a similar penalty per tenant.

Figure 14: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 64-byte IP packet size.

Figure 15: Total throughput measured versus total packet rate generated by 2 to 4 tenants for 64-byte packet size. Comparison between normal OpenStack mode and Linux Bridge bypass with 3 and 4 tenants.

Given the results of the previous section, it is likely that the Linux Bridges are responsible for most of this performance degradation. In Figure 15 a comparison is presented between the total throughput obtained under normal OpenStack operations and the corresponding total throughput measured in a custom configuration where the Linux Bridges attached to each VM are bypassed. To implement the latter scenario, the OpenStack virtual network configuration running in the compute node was modified by connecting each VM's tap interface directly to the OVS integration bridge. The curves show that the presence of Linux Bridges in normal OpenStack mode is indeed causing performance degradation, especially when the workload is high (i.e., with 4 tenants). It is interesting to note also that the penalty related to the number of tenants is mitigated by the bypass, but not fully solved.
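The bypass amounts to a couple of manual re-cabling operations on the compute node. The following sketch shows the idea for one VM port; it is our illustration of such a custom configuration, not an official OpenStack option, it reuses the hypothetical interface names introduced earlier, and it removes the security rules normally enforced on the Linux Bridge.

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

PORT_ID = "0a1b2c3d-4e"   # hypothetical truncated Neutron port ID
TAP = f"tap{PORT_ID}"
QBR = f"qbr{PORT_ID}"
QVO = f"qvo{PORT_ID}"

# Detach the VM tap from its per-port Linux Bridge...
sh(f"ip link set {TAP} nomaster")
# ...remove the now-unused veth leg from the integration bridge...
sh(f"ovs-vsctl del-port br-int {QVO}")
# ...and plug the tap straight into br-int, skipping the Linux Bridge hop.
sh(f"ovs-vsctl add-port br-int {TAP}")
```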
5.4. Multitenant NFV Scenario with Shared Network Functions. The third series of experiments was performed with reference to the multitenant NFV scenario with shared network functions described in Section 4.3. In each experiment, four tenants are equally generating increasing amounts of traffic, ranging from 1 to 100 Kpps. Figures 16 and 17 show the packet rate actually received at the destination from tenant T1 as a function of the packet rate generated by T1, for different levels of VNF chaining, with 1500- and 64-byte IP packet size, respectively. The measurements demonstrate that, for the 1500-byte case, adding a single shared VNF (even one that executes heavy packet processing, such as the DPI) does not significantly impact the forwarding performance of the OpenStack compute node for a packet rate below 50 Kpps (note that the physical capacity is saturated by the flows simultaneously generated from four tenants at around 20 Kpps, similarly to what happens in the dedicated VNF case of Figure 13). Then the throughput slowly degrades. In contrast, when 64-byte packets are generated, even a single VNF can cause heavy performance losses above 25 Kpps, when the packet rate reaches the sustainability limit of the forwarding capacity of our compute node. Independently of the packet size, adding another VNF with heavy packet processing (the firewall/NAT is configured with 40,000 matching rules) causes the performance to rapidly degrade. This is confirmed when a fourth VNF is added to the chain, although for the 1500-byte case the measured packet rate is the one that saturates the maximum bandwidth made available by the traffic shaper. Very similar performance, which we do not show here, was measured also for the other three tenants.
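To give an idea of the kind of processing load these shared VNFs impose, the sketch below fills a firewall VM with a large number of matching rules and configures a simple rate limiter in the traffic shaper VM. It is only an illustration under assumptions of ours: the interface name, the matched subnet, and the shaping rate are hypothetical, and the paper specifies only the 40,000-rule figure.

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

# Firewall/NAT VM: install many rules so that every forwarded packet is
# matched against a long chain (the experiments used 40,000 rules).
# In practice iptables-restore would load such a rule set much faster.
for i in range(40_000):
    a, b = (i >> 8) & 0xFF, i & 0xFF
    sh(f"iptables -A FORWARD -s 172.16.{a}.{b} -j DROP")

# Traffic shaper VM: cap the egress rate on the outward-facing interface
# with a token bucket filter (rate and interface chosen arbitrarily here).
sh("tc qdisc add dev eth1 root tbf rate 200mbit burst 32kbit latency 50ms")
```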
Figure 16: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 1500-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Figure 17: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 64-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Figure 18: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 1500-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Figure 19: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 64-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

To further investigate the effect of VNF chaining, we considered the case when traffic generated by tenant T1 is not subject to VNF chaining (as in Figure 7(a)), whereas flows originated from T2, T3, and T4 are processed by four VNFs (as in Figure 7(d)). The results presented in Figures 18 and 19 demonstrate that, owing to the traffic shaping function applied to the other tenants, the throughput of T1 can reach values not very far from the case when it is the only active tenant, especially for packet rates below 35 Kpps. Therefore, a smart choice of the VNF chaining and a careful planning of the cloud platform resources could improve the performance of a given class of priority customers. In the same situation, we measured the TCP throughput achievable by the four tenants. As shown in Figure 20, we can reach the same conclusions as in the UDP case.
Figure 20: Received TCP throughput for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d). Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

6. Conclusion

Network Function Virtualization will completely reshape the approach of telco operators to provide existing as well as novel network services, taking advantage of the increased flexibility and reduced deployment costs of the cloud computing paradigm. In this work, the problem of evaluating complexity and performance, in terms of sustainable packet rate, of virtual networking in cloud computing infrastructures dedicated to NFV deployment was addressed. An OpenStack-based cloud platform was considered and deeply analyzed to fully understand the architecture of its virtual network infrastructure. To this end, an ad hoc visual tool was also developed that graphically plots the different functional blocks (and related interconnections) put in place by Neutron, the OpenStack networking service. Some examples were provided in the paper.

The analysis brought the focus of the performance investigation on the two basic software switching elements natively adopted by OpenStack, namely, Linux Bridge and Open vSwitch. Their performance was first analyzed in a single tenant cloud computing scenario, by running experiments on a standard OpenStack setup as well as in ad hoc stand-alone configurations built with the specific purpose of observing them in isolation. The results prove that the Linux Bridge is the critical bottleneck of the architecture, while Open vSwitch shows an almost optimal behavior.

The analysis was then extended to more complex scenarios, assuming a data center hosting multiple tenants deploying NFV environments. The case studies considered first a simple dedicated deep packet inspection function, followed by conventional address translation and routing, and then a more realistic virtual network function chaining shared among a set of customers with increased levels of complexity. Results about sustainable packet rate and throughput performance of the virtual network infrastructure were presented and discussed.

The main outcome of this work is that an open-source cloud computing platform such as OpenStack can be effectively adopted to deploy NFV in network edge data centers replacing legacy telco central offices. However, this solution poses some limitations to the network performance which are not simply related to the hosting hardware maximum capacity but also to the virtual network architecture implemented by OpenStack. Nevertheless, our study demonstrates that some of these limitations can be mitigated with a careful redesign of the virtual network infrastructure and an optimal planning of the virtual network functions. In any case, such limitations must be carefully taken into account for any engineering activity in the virtual networking arena.

Obviously, scaling up the system and distributing the virtual network functions among several compute nodes will definitely improve the overall performance. However, in this case the role of the physical network infrastructure becomes critical, and an accurate analysis is required in order to isolate the contributions of virtual and physical components. We plan to extend our study in this direction in our future work, after properly upgrading our experimental test-bed.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was partially funded by EIT ICT Labs, Action Line on Future Networking Solutions, Activity no. 15270/2015: "SDN at the Edges." The authors would like to thank Mr. Giuliano Santandrea for his contributions to the experimental setup.

References

[1] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.
[2] N. M. Mosharaf Kabir Chowdhury and R. Boutaba, "A survey of network virtualization," Computer Networks, vol. 54, no. 5, pp. 862–876, 2010.
[3] The Open Networking Foundation, Software-Defined Networking: The New Norm for Networks, ONF White Paper, The Open Networking Foundation, 2012.
[4] The European Telecommunications Standards Institute, "Network functions virtualisation (NFV); architectural framework," ETSI GS NFV 002, V1.2.1, The European Telecommunications Standards Institute, 2014.
[5] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63–70, 2013.
[6] J. Soares, C. Goncalves, B. Parreira et al., "Toward a telco cloud environment for service functions," IEEE Communications Magazine, vol. 53, no. 2, pp. 98–106, 2015.
[7] Open Networking Lab, Central Office Re-Architected as Datacenter (CORD), ON.Lab White Paper, Open Networking Lab, 2015.
[8] K. Pretz, "Software already defines our lives—but the impact of SDN will go beyond networking alone," IEEE The Institute, vol. 38, no. 4, p. 8, 2014.
[9] OpenStack Project, http://www.openstack.org.
[10] F. Sans and E. Gamess, "Analytical performance evaluation of different switch solutions," Journal of Computer Networks and Communications, vol. 2013, Article ID 953797, 11 pages, 2013.
[11] P. Emmerich, D. Raumer, F. Wohlfart, and G. Carle, "Performance characteristics of virtual switching," in Proceedings of the 3rd International Conference on Cloud Networking (CloudNet '14), pp. 120–125, IEEE, Luxembourg City, Luxembourg, October 2014.
[12] R. Shea, F. Wang, H. Wang, and J. Liu, "A deep investigation into network performance in virtual machine based cloud environments," in Proceedings of the 33rd IEEE Conference on Computer Communications (INFOCOM '14), pp. 1285–1293, IEEE, Ontario, Canada, May 2014.
[13] P. Rad, R. V. Boppana, P. Lama, G. Berman, and M. Jamshidi, "Low-latency software defined network for high performance clouds," in Proceedings of the 10th System of Systems Engineering Conference (SoSE '15), pp. 486–491, San Antonio, Tex, USA, May 2015.
[14] S. Oechsner and A. Ripke, "Flexible support of VNF placement functions in OpenStack," in Proceedings of the 1st IEEE Conference on Network Softwarization (NETSOFT '15), pp. 1–6, London, UK, April 2015.
[15] G. Almasi, M. Banikazemi, B. Karacali, M. Silva, and J. Tracey, "Openstack networking: it's time to talk performance," in Proceedings of the OpenStack Summit, Vancouver, Canada, May 2015.
[16] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of network virtualization in cloud computing infrastructures: the OpenStack case," in Proceedings of the 3rd IEEE International Conference on Cloud Networking (CloudNet '14), pp. 132–137, Luxembourg City, Luxembourg, October 2014.
[17] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of multi-tenant virtual networks in OpenStack-based cloud infrastructures," in Proceedings of the 2nd IEEE Workshop on Cloud Computing Systems, Networks, and Applications (CCSNA '14), in conjunction with IEEE Globecom 2014, pp. 81–85, Austin, Tex, USA, December 2014.
[18] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation," in Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '12), pp. 253–264, ACM, December 2012.
[19] P. Bellavista, F. Callegati, W. Cerroni et al., "Virtual network function embedding in real cloud environments," Computer Networks, vol. 93, part 3, pp. 506–517, 2015.
[20] M. F. Bari, R. Boutaba, R. Esteves et al., "Data center network virtualization: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 2, pp. 909–928, 2013.
[21] The Linux Foundation, Linux Bridge, The Linux Foundation, 2009, http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge.
[22] J. T. Yu, "Performance evaluation of Linux bridge," in Proceedings of the Telecommunications System Management Conference, Louisville, Ky, USA, April 2004.
[23] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, and S. Shenker, "Extending networking into the virtualization layer," in Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets '09), New York, NY, USA, October 2009.
[24] G. Santandrea, Show My Network State, 2014, https://sites.google.com/site/showmynetworkstate.
[25] R. Jain and S. Paul, "Network virtualization and software defined networking for cloud computing: a survey," IEEE Communications Magazine, vol. 51, no. 11, pp. 24–31, 2013.
[26] "RUDE & CRUDE: Real-Time UDP Data Emitter & Collector for RUDE," http://sourceforge.net/projects/rude/.
[27] iperf3: a TCP, UDP, and SCTP network bandwidth measurement tool, https://github.com/esnet/iperf.
[28] nDPI: Open and Extensible LGPLv3 Deep Packet Inspection Library, http://www.ntop.org/products/ndpi/.