Research Article
Virtual Networking Performance in OpenStack Platform for
Network Function Virtualization
Copyright © 2016 Franco Callegati et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of
network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that
will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data
centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to
take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose
a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights
on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it
can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is
presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single tenant
and multitenant scenarios. From the results of the evaluation it is possible to highlight potentials and limitations of running NFV
on OpenStack.
deployment costs typical of the cloud-based approach, paving the way to the upcoming software-centric evolution of telecommunications [8]. However, a number of challenges must be dealt with, in terms of system integration, data center management, and packet processing performance. For instance, if VLANs are used in the physical switches and in the virtual LANs within the cloud infrastructure, a suitable integration is necessary, and the coexistence of different IP virtual networks dedicated to multiple tenants must be seamlessly guaranteed with proper isolation.

Then a few questions naturally arise: Will cloud computing platforms actually be capable of satisfying the requirements of complex communication environments such as the operators' edge networks? Will data centers be able to effectively replace the existing telco infrastructures at the edge? Will virtualized networks provide performance comparable to that achieved with current physical networks, or will they pose significant limitations? Indeed the answer to these questions will depend on the cloud management platform considered. In this work the focus is on OpenStack, which is among the state-of-the-art Linux-based virtualization and cloud management tools. Developed by the open-source software community, OpenStack implements the Infrastructure-as-a-Service (IaaS) paradigm in a multitenant context [9].

To the best of our knowledge, not much work has been reported about the actual performance limits of network virtualization in OpenStack cloud infrastructures under the NFV scenario. Some authors assessed the performance of Linux-based virtual switching [10, 11], while others investigated network performance in public cloud services [12]. Solutions for low-latency SDN implementation on high-performance cloud platforms have also been developed [13]. However, none of the above works specifically deals with NFV scenarios on the OpenStack platform. Although some mechanisms for effectively placing virtual network functions within an OpenStack cloud have been presented [14], a detailed analysis of their network performance has not been provided yet.

This paper aims at providing insights on how the OpenStack platform implements multitenant network virtualization, focusing in particular on the performance issues and trying to fill a gap that is starting to get attention also from the OpenStack developer community [15]. The objective is to identify performance bottlenecks in the cloud implementation of the NFV paradigms. An ad hoc set of experiments was designed to evaluate the OpenStack performance under critical load conditions, in both single tenant and multitenant scenarios. The results reported in this work extend the preliminary assessment published in [16, 17].

The paper is structured as follows: the network virtualization concept in cloud computing infrastructures is further elaborated in Section 2; the OpenStack virtual network architecture is illustrated in Section 3; the experimental test-bed that we have deployed to assess its performance is presented in Section 4; the results obtained under different scenarios are discussed in Section 5; some conclusions are finally drawn in Section 6.

2. Cloud Network Virtualization

Generally speaking, network virtualization is not a new concept. Virtual LANs, Virtual Private Networks, and Overlay Networks are examples of virtualization techniques already widely used in networking, mostly to achieve isolation of traffic flows and/or of whole network sections, either for security or for functional purposes such as traffic engineering and performance optimization [2].

Upon considering cloud computing infrastructures, the concept of network virtualization evolves even further. It is not just that some functionalities can be configured in physical devices to obtain some additional functionality in virtual form. In cloud infrastructures whole parts of the network are virtual, implemented with software devices and/or functions running within the servers. This new "softwarized" network implementation scenario allows novel network control and management paradigms. In particular, the synergies between NFV and SDN offer programmatic capabilities that allow easily defining and flexibly managing multiple virtual network slices at levels not achievable before [1].

In cloud networking the typical scenario is a set of VMs dedicated to a given tenant, able to communicate with each other as if connected to the same Local Area Network (LAN), independently of the physical server(s) they are running on. The VMs and LANs of different tenants have to be isolated and should communicate with the outside world only through layer 3 routing and filtering devices. From such requirements stem two major issues to be addressed in cloud networking: (i) integration of any set of virtual networks defined in the data center physical switches with the specific virtual network technologies adopted by the hosting servers and (ii) isolation among virtual networks that must be logically separated because they are dedicated to different purposes or different customers. Moreover, these problems should be solved with performance optimization in mind, for instance, aiming at keeping VMs with intensive exchange of data colocated in the same server, thus keeping local traffic inside the host, reducing the need for external network resources, and minimizing the communication latency.

The solution to these issues is usually fully supported by the VM manager (i.e., the Hypervisor) running on the hosting servers. Layer 3 routing functions can be executed by taking advantage of lightweight virtualization tools, such as Linux containers or network namespaces, resulting in isolated virtual networks with dedicated network stacks (e.g., IP routing tables and netfilter flow states) [18]. Similarly, layer 2 switching is typically implemented by means of kernel-level virtual bridges/switches interconnecting a VM's virtual interface to a host's physical interface. Moreover, the VM placing algorithms may be designed to take networking issues into account, thus optimizing the networking in the cloud together with computation effectiveness [19]. Finally, it is worth mentioning that whatever network virtualization technology is adopted within a data center, it should be compatible with SDN-based implementations of the control plane (e.g., OpenFlow) for improved manageability and programmability [20].
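As a concrete illustration of these building blocks, the following minimal sketch shows how an isolated endpoint can be created on a Linux host by combining a network namespace, a veth pair, and a kernel bridge with the standard iproute2 tools. It is not part of the paper's test-bed: all names and the address range are hypothetical, and the commands must be run with root privileges.

# Minimal sketch (assumed: iproute2 installed, run as root): attach an
# isolated "tenant" endpoint, living in its own network namespace, to a
# kernel bridge through a veth pair. All names/addresses are placeholders.
import subprocess

def sh(cmd):
    # Run a shell command and raise an error if it fails.
    subprocess.run(cmd, shell=True, check=True)

sh("ip netns add ns-tenant1")                             # dedicated network stack
sh("ip link add br-demo type bridge")                     # kernel-level bridge
sh("ip link set br-demo up")
sh("ip link add veth-host type veth peer name veth-ns")   # virtual "cable"
sh("ip link set veth-ns netns ns-tenant1")                # one end inside the namespace
sh("ip link set veth-host master br-demo")                # other end attached to the bridge
sh("ip link set veth-host up")
sh("ip netns exec ns-tenant1 ip addr add 192.0.2.10/24 dev veth-ns")
sh("ip netns exec ns-tenant1 ip link set veth-ns up")
sh("ip netns exec ns-tenant1 ip link set lo up")
# The namespace now has its own interfaces, routing table, and netfilter
# state, and it reaches the rest of the host only through br-demo.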
Figure 1: Physical nodes and networks of a typical OpenStack cloud (cloud customers, Internet, external network, management network).
For the purposes of this work the implementation of layer 2 connectivity in the cloud environment is of particular relevance. Many Hypervisors running on Linux systems implement the LANs inside the servers using Linux Bridge, the native kernel bridging module [21]. This solution is straightforward and is natively integrated with the powerful Linux packet filtering and traffic conditioning kernel functions. The overall performance of this solution should be at a reasonable level when the system is not overloaded [22]. The Linux Bridge basically works as a transparent bridge with MAC learning, providing the same functionality as a standard Ethernet switch in terms of packet forwarding. But such standard behavior is not compatible with SDN and is not flexible enough when aspects such as multitenant traffic isolation, transparent VM mobility, and fine-grained forwarding programmability are critical. The Linux-based bridging alternative is Open vSwitch (OVS), a software switching facility specifically designed for virtualized environments and capable of reaching kernel-level performance [23]. OVS is also OpenFlow-enabled and therefore fully compatible and integrated with SDN solutions.
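To give a flavor of the programmability mentioned above, the sketch below creates an OVS bridge and installs a simple OpenFlow rule with the standard ovs-vsctl and ovs-ofctl utilities; OpenStack performs conceptually similar operations on br-int and br-tun. Bridge and port names and the VLAN identifier are illustrative only, not taken from the paper's configuration.

# Illustrative OVS sketch (assumed: Open vSwitch installed, run as root).
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

sh("ovs-vsctl add-br br-demo")              # create a software switch instance
sh("ovs-vsctl add-port br-demo veth-host")  # attach an existing interface to it
# Flow rule: packets received on OpenFlow port 1 are tagged with VLAN 101 and
# then forwarded by the normal L2 pipeline; Neutron installs similar
# tenant-mapping rules in br-int and br-tun.
sh('ovs-ofctl add-flow br-demo "in_port=1,actions=mod_vlan_vid:101,normal"')
sh("ovs-ofctl dump-flows br-demo")          # inspect the installed rules
sh("ovs-vsctl show")                        # inspect the bridge/port layout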
3. OpenStack Virtual Network Infrastructure

OpenStack provides cloud managers with a web-based dashboard as well as a powerful and flexible Application Programming Interface (API) to control a set of physical hosting servers executing different kinds of Hypervisors (in general, OpenStack is designed to manage a number of computers hosting application servers: these application servers can be executed by fully fledged VMs, lightweight containers, or bare-metal hosts; in this work we focus on the most challenging case of application servers running on VMs) and to manage the required storage facilities and virtual network infrastructures. The OpenStack dashboard also allows instantiating computing and networking resources within the data center infrastructure with a high level of transparency. As illustrated in Figure 1, a typical OpenStack cloud is composed of a number of physical nodes and networks:

(i) Controller node: managing the cloud platform.
(ii) Network node: hosting the networking services for the various tenants of the cloud and providing external connectivity.
(iii) Compute nodes: as many hosts as needed in the cluster to execute the VMs.
(iv) Storage nodes: to store data and VM images.
(v) Management network: the physical networking infrastructure used by the controller node to manage the OpenStack cloud services running on the other nodes.
(vi) Instance/tunnel network (or data network): the physical network infrastructure connecting the network node and the compute nodes, to deploy virtual tenant networks and allow inter-VM traffic exchange and VM connectivity to the cloud networking services running in the network node.
(vii) External network: the physical infrastructure enabling connectivity outside the data center.

OpenStack has a component specifically dedicated to network service management: this component, formerly known as Quantum, was renamed Neutron in the Havana release. Neutron decouples the network abstractions from the actual implementation and provides administrators and users with a flexible interface for virtual network management. The Neutron server is centralized and typically runs in the controller node. It stores all network-related information and implements the virtual network infrastructure in a distributed and coordinated way. This allows Neutron to transparently manage multitenant networks across multiple compute nodes and to provide transparent VM mobility within the data center.
Neutron's main network abstractions are

(i) network, a virtual layer 2 segment;
(ii) subnet, a layer 3 IP address space used in a network;
(iii) port, an attachment point to a network and to one or more subnets on that network;
(iv) router, a virtual appliance that performs routing between subnets and address translation;
(v) DHCP server, a virtual appliance in charge of IP address distribution;
(vi) security group, a set of filtering rules implementing a cloud-level firewall.

A cloud customer wishing to implement a virtual infrastructure in the cloud is considered an OpenStack tenant and can use the OpenStack dashboard to instantiate computing and networking resources, typically creating a new network and the necessary subnets, optionally spawning the related DHCP servers, then starting as many VM instances as required based on a given set of available images, and specifying the subnet (or subnets) to which the VM is connected. Neutron takes care of creating a port on each specified subnet (and its underlying network) and of connecting the VM to that port, while the DHCP service on that network (resident in the network node) assigns a fixed IP address to it. Other virtual appliances (e.g., routers providing global connectivity) can be implemented directly in the cloud platform, by means of containers and network namespaces typically defined in the network node. The different tenant networks are isolated by means of VLANs and network namespaces, whereas the security groups protect the VMs from external attacks or unauthorized access. When some VM instances offer services that must be reachable by external users, the cloud provider defines a pool of floating IP addresses on the external network and configures the network node with VM-specific forwarding rules based on those floating addresses.
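The same workflow can be scripted, for instance, with the unified OpenStack command line client, as in the hedged sketch below. All names, the subnet range, the image and flavor, the external network, and the floating IP address are placeholders and do not describe the test-bed used in this paper.

# Sketch of the tenant provisioning workflow with the OpenStack CLI
# (assumed: python-openstackclient installed and credentials loaded).
import subprocess

def openstack(args):
    subprocess.run(["openstack"] + args.split(), check=True)

openstack("network create tenant1-net")
openstack("subnet create --network tenant1-net --subnet-range 192.0.2.0/24 "
          "--dhcp tenant1-subnet")
openstack("router create tenant1-router")
openstack("router add subnet tenant1-router tenant1-subnet")
openstack("router set --external-gateway public tenant1-router")
openstack("server create --image cirros --flavor m1.small "
          "--network tenant1-net vm1")
# A floating IP taken from the external network makes the VM reachable from
# outside the cloud through the network node (use the address actually
# returned by the previous command).
openstack("floating ip create public")
openstack("server add floating ip vm1 203.0.113.10")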
OpenStack implements the virtual network infrastructure (VNI) exploiting multiple virtual bridges connecting virtual and/or physical interfaces that may reside in different network namespaces. To better understand such a complex system, a graphical tool was developed to display all the network elements used by OpenStack [24]. Two examples, showing the internal state of a network node connected to three virtual subnets and a compute node running two VMs, are displayed in Figures 2 and 3, respectively.

Each node runs an OVS-based integration bridge named br-int and, connected to it, an additional OVS bridge for each data center physical network attached to the node. So the network node (Figure 2) includes br-tun for the instance/tunnel network and br-ex for the external network. A compute node (Figure 3) includes br-tun only.

Layer 2 virtualization and multitenant isolation on the physical network can be implemented using either VLANs or layer 2-in-layer 3/4 tunneling solutions, such as Virtual eXtensible LAN (VXLAN) or Generic Routing Encapsulation (GRE), which allow extending the local virtual networks also to remote data centers [25]. The examples shown in Figures 2 and 3 refer to the case of tenant isolation implemented with GRE tunnels on the instance/tunnel network. Whatever virtualization technology is used in the physical network, its virtual networks must be mapped into the VLANs used internally by Neutron to achieve isolation. This is performed by taking advantage of the programmable features available in OVS, through the insertion of appropriate OpenFlow mapping rules in br-int and br-tun.

Virtual bridges are interconnected by means of either virtual Ethernet (veth) pairs or patch port pairs, consisting of two virtual interfaces that act as the endpoints of a pipe: anything entering one endpoint always comes out on the other side.

From the networking point of view, the creation of a new VM instance involves the following steps:

(i) The OpenStack scheduler component running in the controller node chooses the compute node that will host the VM.
(ii) A tap interface is created for each VM network interface to connect it to the Linux kernel.
(iii) A Linux Bridge dedicated to each VM network interface is created (in Figure 3 two of them are shown) and the corresponding tap interface is attached to it.
(iv) A veth pair connecting the new Linux Bridge to the integration bridge is created.

The veth pair clearly emulates the Ethernet cable that would connect the two bridges in real life. Nonetheless, why the new Linux Bridge is needed is not intuitive, as the VM's tap interface could be directly attached to br-int. In short, the reason is that the antispoofing rules currently implemented by Neutron adopt the native Linux kernel filtering functions (netfilter) applied to bridged tap interfaces, which work only under Linux Bridges. Therefore, the Linux Bridge is required as an intermediate element to interconnect the VM to the integration bridge. The security rules are applied to the Linux Bridge on the tap interface that connects the kernel-level bridge to the virtual Ethernet port of the VM running in user space.
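On a compute node the chain just described (tap interface, per-VM Linux Bridge, veth pair, integration bridge) can be observed directly. The read-only sketch below assumes the standard bridge-utils, iproute2, and Open vSwitch command line tools; the qvb/qvo names follow the convention visible in Figure 3, while the exact identifiers are deployment dependent.

# Read-only inspection of the per-VM plumbing on an OpenStack compute node.
import subprocess

for cmd in (
    "brctl show",                   # per-VM Linux Bridges with their tap ports
    "ip -br link show type veth",   # qvb/qvo veth pairs between those bridges and br-int
    "ovs-vsctl list-ports br-int",  # integration bridge ports (qvoXXXX, patch-tun)
    "ovs-ofctl dump-flows br-tun",  # VLAN/GRE mapping rules on the tunnel bridge
):
    print("$", cmd)
    subprocess.run(cmd, shell=True, check=False)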
Figure 2: Network elements in an OpenStack network node connected to three virtual subnets. Three OVS bridges (red boxes) are interconnected by patch port pairs (orange boxes). br-ex is directly attached to the external network physical interface (eth0), whereas a GRE tunnel is established on the instance/tunnel network physical interface (eth1) to connect br-tun with its counterpart in the compute node. A number of br-int ports (light-green boxes) are connected to four virtual router interfaces and three DHCP servers. An additional physical interface (eth2) connects the network node to the management network.
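On the network node of Figure 2, the virtual routers and DHCP servers run inside dedicated network namespaces. A quick, read-only way to see them is sketched below; the qrouter-/qdhcp- prefixes are the usual Neutron naming convention, while the specific UUIDs depend on the deployment.

# List the Neutron router/DHCP namespaces on the network node and show the
# addresses and routes of the first virtual router found (run as root).
import subprocess

out = subprocess.run(["ip", "netns", "list"],
                     capture_output=True, text=True, check=True).stdout
namespaces = [line.split()[0] for line in out.splitlines() if line.strip()]
print("namespaces:", namespaces)   # typically qrouter-<uuid> and qdhcp-<uuid>

routers = [ns for ns in namespaces if ns.startswith("qrouter-")]
if routers:
    # Each qrouter namespace holds its own qr-*/qg-* interfaces and routes.
    subprocess.run(["ip", "netns", "exec", routers[0], "ip", "addr"], check=True)
    subprocess.run(["ip", "netns", "exec", routers[0], "ip", "route"], check=True)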
4. Experimental Setup

The previous section makes the complexity of the OpenStack virtual network infrastructure clear. To understand optimal design strategies in terms of network performance, it is of great importance to analyze it under critical traffic conditions and assess the maximum sustainable packet rate under different application scenarios. The goal is to isolate as much as possible the performance of the main OpenStack network components and determine where the bottlenecks are located, speculating on possible improvements. To this purpose, a test-bed including a controller node, one or two compute nodes (depending on the specific experiment), and a network node was deployed and used to obtain the results presented in the following. In the test-bed each compute node runs KVM, the native Linux VM Hypervisor, and is equipped with 8 GB of RAM and a quad-core processor with hyperthreading enabled, resulting in 8 virtual CPUs.

The test-bed was configured to implement three possible use cases:

(1) A typical single tenant cloud computing scenario.
(2) A multitenant NFV scenario with dedicated network functions.
(3) A multitenant NFV scenario with shared network functions.

For each use case multiple experiments were executed, as reported in the following. In the various experiments, typically a traffic source sends packets at increasing rate to a destination that measures the received packet rate and throughput. To this purpose the RUDE & CRUDE tool was used, for both traffic generation and measurement [26]. In some cases, the iperf3 tool was also added to generate background traffic at a fixed data rate [27]. All physical interfaces involved in the experiments were Gigabit Ethernet network cards.

4.1. Single Tenant Cloud Computing Scenario. This is the typical configuration where a single tenant runs one or multiple VMs that exchange traffic with one another in the cloud or with an external host, as shown in Figure 4. This is a rather trivial case of limited general interest, but it is useful to assess some basic concepts and pave the way to the deeper analysis developed in the second part of this section. In the experiments reported here, as mentioned above, the virtualization Hypervisor was always KVM.
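For completeness, the fixed-rate background traffic mentioned above, generated with iperf3 [27], can be reproduced roughly as in the following sketch; the addresses, offered rate, payload size, and duration are placeholders and do not correspond to the exact settings of the original tests.

# Fixed-rate UDP background traffic with iperf3 (placeholder values).
# On the receiving host an iperf3 server must be running:  iperf3 -s
import subprocess

subprocess.run([
    "iperf3",
    "-c", "192.0.2.20",   # receiver address (placeholder)
    "-u",                 # UDP mode
    "-b", "100M",         # target offered rate
    "-l", "1470",         # UDP payload size, keeping the IP packet within a 1500-byte MTU
    "-t", "60",           # test duration in seconds
], check=True)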
Figure 3: Network elements in an OpenStack compute node running two VMs. Two Linux Bridges (blue boxes) are attached to the VM tap
interfaces (green boxes) and connected by virtual Ethernet pairs (light-blue boxes) to br-int.
A scenario with OpenStack running the cloud environment and a scenario without OpenStack were considered, to allow a general comparison and a first isolation of the performance degradation due to the individual building blocks, in particular Linux Bridge and OVS. The experiments cover the following cases:

(1) OpenStack scenario: it adopts the standard OpenStack cloud platform, as described in the previous section, with two VMs acting as sender and receiver, respectively. In particular, the following setups were tested:

(1.1) A single compute node executing two colocated VMs.
(1.2) Two distinct compute nodes, each executing a VM.

(2) Non-OpenStack scenario: it adopts physical hosts running Linux Ubuntu server and the KVM Hypervisor, using either OVS or Linux Bridge as the virtual switch. The following setups were tested:

(2.1) One physical host executing two colocated VMs, acting as sender and receiver and directly connected to the same Linux Bridge.
(2.2) The same setup as the previous one, but with an OVS bridge instead of a Linux Bridge.
(2.3) Two physical hosts: one executing the sender VM connected to an internal OVS and the other natively acting as the receiver.
Figure 6: The OpenStack dashboard shows the tenants' virtual networks (slices). Each slice includes a customer VM connected to an internal network (InVMnet_i) and a second VM performing DPI and packet forwarding between InVMnet_i and DPInet_i. Connectivity with the public Internet is provided for all tenants by the virtual router in the bottom-left corner.
The virtual network path inside the compute node for the VNF chaining of Figure 7(d) is displayed in Figure 8. The peculiar nature of NFV traffic flows is clearly shown in the figure, where packets are forwarded multiple times across br-int as they enter and exit the multiple VNFs running in the compute node.

5. Numerical Results

5.1. Benchmark Performance. Before presenting and discussing the performance of the study scenarios described above, it is important to set some benchmark as a reference for comparison. This was done by considering a back-to-back (B2B) connection between two physical hosts with the same hardware configuration used in the cluster of the cloud platform. The former host acts as traffic generator while the latter acts as traffic sink. The aim is to verify and assess the maximum throughput and sustainable packet rate of the hardware platform used for the experiments. Packet flows ranging from 10^3 to 10^5 packets per second (pps), for both 64- and 1500-byte IP packet sizes, were generated.

For 1500-byte packets, the throughput saturates to about 970 Mbps at 80 Kpps. Given that the measurement does not consider the Ethernet overhead, this limit is clearly very close to 1 Gbps, which is the physical limit of the Ethernet interface. For 64-byte packets, the results are different, since the maximum measured throughput is about 150 Mbps. Therefore the limiting factor is not the Ethernet bandwidth but the maximum sustainable packet processing rate of the compute node. These results are shown in Figure 9.
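As a quick sanity check of these benchmark figures, the short calculation below converts the measured IP-level throughput into a packet rate and into the corresponding line rate, assuming the standard 38 bytes of per-frame Ethernet overhead (preamble, header, FCS, and interframe gap). It is only meant to show why about 970 Mbps of 1500-byte packets saturates a Gigabit Ethernet link.

# Sanity check: 970 Mbps of 1500-byte IP packets versus the 1 Gbps line rate.
# Assumption: 38 bytes of Ethernet overhead per frame
# (8 preamble + 14 header + 4 FCS + 12 interframe gap).
ip_packet = 1500            # bytes, IP level (as measured by the tools)
overhead = 38               # bytes added on the wire per frame
throughput_ip = 970e6       # bits/s measured at the IP level

pps = throughput_ip / (ip_packet * 8)
line_rate = pps * (ip_packet + overhead) * 8

print(f"packet rate ~ {pps / 1e3:.1f} kpps")         # about 80.8 kpps, consistent with Figure 9
print(f"line rate   ~ {line_rate / 1e6:.0f} Mbps")   # about 995 Mbps, i.e., the GbE limit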
Figure 7: Multitenant NFV scenario with shared network functions tested on the OpenStack platform.
This latter limitation, related to the processing capabilities of the hosts, is not very relevant to the scope of this work. Indeed it is always possible, in a real operational environment, to deploy more powerful and better dimensioned hardware. This was not possible in this set of experiments, where the cloud cluster was an existing research infrastructure which could not be modified at will. Nonetheless, the objective here is to understand the limitations that emerge as a consequence of the networking architecture resulting from the deployment of the VNFs in the cloud, and not of the specific hardware configuration. For these reasons, as well as for the sake of brevity, the numerical results presented in the following mostly focus on the case of 1500-byte packet length, which stresses the network more than the hosts in terms of performance.

5.2. Single Tenant Cloud Computing Scenario. The first series of results is related to the single tenant scenario described in Section 4.1. Figure 10 shows the comparison of OpenStack setups (1.1) and (1.2) with the B2B case. The figure shows that the different networking configurations play a crucial role in performance. Setup (1.1), with the two VMs colocated in the same compute node, is clearly more demanding, since the compute node has to process the workload of all the components shown in Figure 3, that is, packet generation and reception in two VMs and layer 2 switching in two Linux Bridges and two OVS bridges (as a matter of fact, the packets are both outgoing and incoming at the same time within the same physical machine). The performance starts deviating from the B2B case at around 20 Kpps, with a saturating effect starting at 30 Kpps. This is the maximum packet processing capability of the compute node, regardless of the physical networking capacity, which is not fully exploited in this particular scenario where the traffic flow does not leave the physical host. Setup (1.2) splits the workload over two physical machines and the benefit is evident. The performance is almost ideal, with a very small penalty due to the virtualization overhead.

These very simple experiments lead to an important conclusion that motivates the more complex experiments
Figure 8: A view of the OpenStack compute node with the tenant VM and the VNFs installed including the building blocks of the virtual
network infrastructure. The red dashed line shows the path followed by the packets traversing the VNF chain displayed in Figure 7(d).
Figure 9: Throughput versus generated packet rate in the B2B setup for 64- and 1500-byte packets. Comparison with ideal 1500-byte packet throughput.

Figure 10: Received versus generated packet rate in the OpenStack scenario setups (1.1) and (1.2), with 1500-byte packets.
Figure 11: Received versus generated packet rate in the Non-OpenStack scenario setups (2.1) and (2.2), with 1500-byte packets.

Figure 12: Received versus generated packet rate in the Non-OpenStack scenario setup (2.3), with 1500-byte packets.
Figure 17: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 64-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Figure 18: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 1500-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.
[8] K. Pretz, "Software already defines our lives—but the impact of SDN will go beyond networking alone," IEEE The Institute, vol. 38, no. 4, p. 8, 2014.
[9] OpenStack Project, http://www.openstack.org.
[10] F. Sans and E. Gamess, "Analytical performance evaluation of different switch solutions," Journal of Computer Networks and Communications, vol. 2013, Article ID 953797, 11 pages, 2013.
[11] P. Emmerich, D. Raumer, F. Wohlfart, and G. Carle, "Performance characteristics of virtual switching," in Proceedings of the 3rd IEEE International Conference on Cloud Networking (CloudNet '14), pp. 120–125, IEEE, Luxembourg City, Luxembourg, October 2014.
[12] R. Shea, F. Wang, H. Wang, and J. Liu, "A deep investigation into network performance in virtual machine based cloud environments," in Proceedings of the 33rd IEEE Conference on Computer Communications (INFOCOM '14), pp. 1285–1293, IEEE, Ontario, Canada, May 2014.
[13] P. Rad, R. V. Boppana, P. Lama, G. Berman, and M. Jamshidi, "Low-latency software defined network for high performance clouds," in Proceedings of the 10th System of Systems Engineering Conference (SoSE '15), pp. 486–491, San Antonio, Tex, USA, May 2015.
[14] S. Oechsner and A. Ripke, "Flexible support of VNF placement functions in OpenStack," in Proceedings of the 1st IEEE Conference on Network Softwarization (NETSOFT '15), pp. 1–6, London, UK, April 2015.
[15] G. Almasi, M. Banikazemi, B. Karacali, M. Silva, and J. Tracey, "Openstack networking: it's time to talk performance," in Proceedings of the OpenStack Summit, Vancouver, Canada, May 2015.
[16] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of network virtualization in cloud computing infrastructures: the OpenStack case," in Proceedings of the 3rd IEEE International Conference on Cloud Networking (CloudNet '14), pp. 132–137, Luxembourg City, Luxembourg, October 2014.
[17] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of multi-tenant virtual networks in OpenStack-based cloud infrastructures," in Proceedings of the 2nd IEEE Workshop on Cloud Computing Systems, Networks, and Applications (CCSNA '14), in conjunction with IEEE Globecom 2014, pp. 81–85, Austin, Tex, USA, December 2014.
[18] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation," in Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '12), pp. 253–264, ACM, December 2012.
[19] P. Bellavista, F. Callegati, W. Cerroni et al., "Virtual network function embedding in real cloud environments," Computer Networks, vol. 93, part 3, pp. 506–517, 2015.
[20] M. F. Bari, R. Boutaba, R. Esteves et al., "Data center network virtualization: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 2, pp. 909–928, 2013.
[21] The Linux Foundation, Linux Bridge, 2009, http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge.
[22] J. T. Yu, "Performance evaluation of Linux bridge," in Proceedings of the Telecommunications System Management Conference, Louisville, Ky, USA, April 2004.
[23] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, and S. Shenker, "Extending networking into the virtualization layer," in Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets '09), New York, NY, USA, October 2009.
[24] G. Santandrea, Show My Network State, 2014, https://sites.google.com/site/showmynetworkstate.
[25] R. Jain and S. Paul, "Network virtualization and software defined networking for cloud computing: a survey," IEEE Communications Magazine, vol. 51, no. 11, pp. 24–31, 2013.
[26] RUDE & CRUDE: Real-Time UDP Data Emitter & Collector for RUDE, http://sourceforge.net/projects/rude/.
[27] iperf3: a TCP, UDP, and SCTP network bandwidth measurement tool, https://github.com/esnet/iperf.
[28] nDPI: Open and Extensible LGPLv3 Deep Packet Inspection Library, http://www.ntop.org/products/ndpi/.