The concept of Network Function Virtualization (NFV) was introduced to accelerate the deployment of new network services and to support operators' revenue and future growth objectives. The demand for cloud services is continuously increasing, so NFV is becoming more popular among application service providers, Internet service providers, and cloud service providers. An NFV deployment consists of several Virtual Network Functions (VNFs), which allow virtualized network services to run on open computing platforms. These functions are effective in increasing network scalability and agility and also enable better use of network resources. NFV also delivers an effective and flexible alternative for deploying services across multiple clouds. However, NFV faces new challenges at deployment time, and in deciding how to migrate functions from one server to another for load balancing, cost reduction, and energy saving (Cho et al., 2017).
In this research paper, we first discuss the related work of several researchers in the field of VNF placement and their corresponding aims. All of these studies have limitations and gaps. To remove these limitations and fill these gaps, we propose a comprehensive model for the placement of VNFs. We then design, develop, and deploy the model in the methodology section in order to evaluate whether this approach is suitable for minimizing network latency and lead time. In the results section, we briefly discuss the outcomes of using this approach.
In today's world the demand for cloud services is continually increasing, so every IT organization must design flexible and efficient mechanisms for the proper placement and chaining of virtual network functions (VNFs). Network Function Virtualization (NFV) combines virtual network functions and has gained popularity among application service providers, Internet service providers, and cloud service providers. This research critically reviews how to design the most effective Network Function Virtualization (NFV) deployment in order to minimize network latency and lead time. We also propose a comprehensive model, based on real measurements, to capture network latency and optimize the placement of VNFs in cloud data centres (CDCs).
The experimental data and results show that VNF Low-Latency Placement (VNF-LLP) reduces network latency effectively compared with generic placement algorithms. In addition, it requires a lower lead time to host a VNF.
Several years ago, adding a new service was difficult due to high cost and concerns about system stability. To solve this issue, an emerging network architecture, NFV, was introduced; it increases flexibility and agility within operators' networks by instantiating virtualized services on demand in cloud data centers (CDCs). NFV is a new way to design, deploy, and manage networking services by decoupling network functions from the physical equipment on which they run: it substitutes hardware-centric, dedicated network devices with software running on general-purpose CPUs or virtual machines, operating on standard servers. This virtualized model helps reduce the Operating and Capital Expenditures of network services by virtualizing network functions such as firewalls, load balancers, intrusion detection devices, WAN accelerators, and transcoders. The concept of NFV is built on existing virtualization technologies.
NFV changes the way network operators architect networks by evolving standard IT virtualization technology to consolidate many equipment types onto industry-standard hardware located in data centers. In this research project, we design a comprehensive model based on real measurements of network latency in order to optimize the placement of VNFs in CDCs. Over the past few years, NFV has become a major catalyst for transformational change in networks. Network Functions Virtualization brings network operators several benefits, a few of which are mentioned here:
- It reduces equipment cost and power consumption by consolidating equipment; cost efficiency plays a major role in the case for NFV.
- It allows the system to abstract the underlying hardware and enables elasticity, scalability, and automation. It also enhances the flexibility of provided network services and shortens the time to launch new services.
- It shortens the typical network operator innovation cycle by increasing the speed of deployment. The model makes new modes of feature evolution feasible and allows network operators to meaningfully reduce the maturation cycle.
- As requirements change continuously, the speed of service deployment can be managed by provisioning remotely in software.
- It enables a wide variety of eco-systems and encourages openness: a virtual appliance market lowers the barrier of entry for small software players and academia, stimulating innovation through the adoption of new virtual services.
- It reduces energy consumption through exploitation of the power-management features of standard servers and storage, as well as workload consolidation and location optimization. Relying on virtualization, workloads can be concentrated on a smaller number of servers at off-peak hours so that all other servers can enter energy-saving mode.
First of all, we must define network latency. Latency is the delay, or interruption, present in a network connection. Latency varies with several factors such as distance, weather, the transmission medium, and the hardware configuration of the hosting devices and users. The total latency in a network connection can be the greatest factor in determining the efficiency of the network: if latency rises beyond a certain point, the working ability of the whole network can be lost. Latency is directly connected to operational efficiency, which benefits from the high uniformity of the physical network platform or other supporting platforms. This paper addresses the research question of how to minimize the total latency of a network when an algorithm is used to place VNFs on VMs. We discuss three basic placement algorithms: the VNF Low-Latency Placement Algorithm (VNF-LLP), the VNF Best-Fit Placement Algorithm (VNF-BFP), and the VNF First-Fit Placement Algorithm (VNF-FFP). We compare these three placement algorithms and select the one best suited to our proposed objectives.
The aim of this research is to design a comprehensive Network Function Virtualization (NFV) model, containing several Virtual Network Functions (VNFs), based on real measurements. This model allows VNF Low-Latency Placement (VNF-LLP) to reduce network latency, so the optimized placement of VNFs in CDCs is very effective in minimizing both network latency and lead time. As we use real measurements, we develop an algorithm that minimizes network latency and lead time when allocating VNFs to VMs. The main objectives of this research project are:
- The first objective concerns the placement of VNFs under three constraints:
  - the network latency within a VNF service chain,
  - the resource capacity available on the virtual machines (VMs), and
  - the resource demands of the VNFs.
- The second objective concerns the development of VNF placement algorithms that reduce network latency and lead time for network-sensitive environments.
Network latency between the different network functions of a system is evaluated under three placement scenarios, and network latency and lead time are measured for each case. In the first case, network function 1 (f1) and network function 2 (f2) are placed on different virtual machines within the same physical machine (PM), and the network latency is recorded. In the second case, f1 and f2 are placed on different PMs connected to the same physical switch, and the network latency is again recorded. In the third case, f1 and f2 are placed on PMs that are not connected to the same physical switch.
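The three placement cases above can be sketched as a simple latency lookup. Note that the numeric latency constants below are illustrative placeholders, not the real measurements used in this research.

```python
# Illustrative latency constants for the three placement cases (placeholders).
LATENCY_SAME_PM = 0.05       # ms: f1, f2 on different VMs within the same PM
LATENCY_SAME_SWITCH = 0.20   # ms: f1, f2 on different PMs sharing one switch
LATENCY_CROSS_SWITCH = 0.50  # ms: f1, f2 on PMs behind different switches

def pairwise_latency(vm1, vm2):
    """Return the network latency between two VNFs given their VM placement.

    Each VM is described by a dict holding its physical machine ('pm')
    and that PM's physical switch ('switch').
    """
    if vm1["pm"] == vm2["pm"]:
        return LATENCY_SAME_PM          # case 1: same PM, different VMs
    if vm1["switch"] == vm2["switch"]:
        return LATENCY_SAME_SWITCH      # case 2: different PMs, same switch
    return LATENCY_CROSS_SWITCH         # case 3: PMs on different switches

f1_vm = {"pm": "pm1", "switch": "sw1"}
f2_vm = {"pm": "pm2", "switch": "sw1"}
print(pairwise_latency(f1_vm, f2_vm))   # 0.2 (case 2)
```

The same three-way distinction is used later when the latency of a whole service chain is summed.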
The research work is presented in five chapters. Chapter 1 introduces the research, its objectives, and its scope. Chapter 2, the literature review, examines technologies related to the research and the limitations of applying them. Chapter 3 describes the methodologies used in the research. Chapter 4 describes how the experiment was designed and presents its results. Chapter 5 summarizes the research work and makes suggestions for future work.
In this section, we discuss how Network Functions Virtualisation (NFV) became achievable through several technology developments. We also examine different techniques used and developed worldwide, and the gaps or limitations in applying them. The proper placement of VNFs in a network infrastructure is the main component in designing this model. Several approaches to VNF placement have already been proposed: researchers have used greedy and heuristic approaches with different algorithms (e.g., linear programming or first-fit). This literature review discusses related work on VNF placement for different purposes; each study has limitations and gaps in accomplishing its objectives.
In terms of resource allocation, Cao et al. (2016) presented a formal VNF Placement (VNF-P) model for hybrid network environments, in which network functions can be allocated to both physical hardware and virtualized instances. The main objective of their model was to minimize the number of used cores and used servers, and they evaluated it with two different types of service chains.
Gaps or limitations
- Their algorithm was well suited only to small network providers, where execution time remains low.
- As the model was not effective for large infrastructures, it is not considered suitable for IT industries.
Cziva et al. (2018) presented an approach that introduces virtualized network functions (VNFs) at the edge of the network to support emerging use cases such as in-network image processing. This approach is well suited to reducing network latency and unwanted utilization in the core network. While the literature discussed above deals with where to place VNFs, this study mainly focused on the proper utilization of allocated resources (such as the number of servers required to set up the VNFs). They formulated the VNF placement problem to reduce latency, and focused on a dynamic rescheduling approach, based on optimal stopping theory, for the optimal placement of VNFs according to temporal network-wide latency. Compared against ISP latency measurements, the dynamic rescheduling approach was found to reduce VNF migrations relative to other schedulers, and it delivered quality virtual function services without violating the maximum latency of certain applications.
Gaps or limitations
Although this approach mainly focuses on the proper utilization of allocated resources, it does not deal with reducing network latency and lead time.
Further, an Integer Linear Programming model was proposed that deals with the network function placement and chaining problem, searching for the shortest chaining length between endpoints and network functions. The authors also proposed an empirical approach that is more efficient and suitable for supporting large infrastructures (Li & Quian, 2016).
Gaps or limitations
To accomplish the desired work, a model should respect time constraints, but this approach did not.
This approach was therefore also rejected, as it was not appropriate for finding the optimal solution.
Ashrafi (2018) then introduced an alternative efficient heuristic algorithm to reduce optical conversions by avoiding unwanted flow traversals within a domain and co-locating network functions of the same chain. The approach is related to the first-fit algorithm and addresses how the number of network function chains grows while reducing optical/electrical/optical conversions. Although this research was based on optics, it can also be considered for VNF placement.
Gaps or limitations
This approach did not consider network resources, including the network capacity of pods and the network demand of network functions.
An orchestrator-based architecture was then introduced that performs automatic allocation and placement of virtual routers (Taleb et al., 2017). The orchestrator creates and removes virtual routers and configures applications for management and maintenance (Benkacem et al., 2018). In this approach, three different algorithms were considered to select the appropriate place for an advanced virtual router. In Agarwal's paper (2019), VNF placement is treated as a main aspect of a vertical's requirements (system resources), and the study is very effective in reducing solution complexity.
Gaps or limitations
Because several placement strategies for virtual routers were used, the experimental research produced limited results.
Xia et al. (2015) also introduced network function chaining expressed in terms of a formal language. They worked with limited network resources and faced optimization issues, focusing on optical technology to perform network function chaining; with network function virtualization enabled, VNFs are placed at the right locations. This study mainly deals with the operating costs of NFV chaining. They further introduced a Binary Integer Programming (BIP) formulation and, from it, an alternative effective heuristic algorithm that stops unnecessary traversals of traffic flows. They also used a first-fit algorithm to demonstrate the effectiveness of BIP in different scenarios.
Gaps or limitations
This model focused only on network resources and did not consider the CPU resource constraint.
Patel & Vutukuru (2017) focused on the placement of VNFs in the LTE packet core. The main purposes of the study were to reduce the latency experienced by users and the operational costs of the network; mobility was another important aspect of the paper. They successfully reduced the average time taken by server handover requests by up to 60%. They discussed the mobility-aware VNF placement problem and proposed a suitable approach; their improved model effectively reduces average latency without considering operational costs.
Gaps or limitations
This approach does not consider allocated resources, so it is not suitable for implementation in a large IT industry.
Ali et al. introduced several virtualization techniques that change the way network operators deploy and control Internet services, focusing on saving operational and capital costs when placing VNFs. Their model consists of several VNFs connected in one chain in sequential order; this chain delivers Internet services to users. They further introduced three integer linear programs for VNF placement with VNF service chaining and evaluated the impact on latency and lead time. They focused on the most challenging task, the latency constraint. This architectural paradigm helped them improve the flexibility of network services and reduce lead time.
Gaps or limitations
As they adopted several hardware appliances, costs became high and maintenance of the integrated services became harder.
Finally, Yala, Frangoudis, & Kestini (2018) proposed an optimal model for placing VNFs in a multi-cloud environment, together with an advanced heuristic approach for placing VNFs dynamically. They treated network capacity as a new parameter for reducing the total response time (latency) of the system, and the proposed heuristic outperforms other introduced models. The model was proposed to serve telecom companies and is most appropriate for static network services. It also deals with the demand for deploying services that are directly tied to physical resources, and is called a data-centric model because its computational resources are moved towards end users. To meet user demand, a network paradigm (NFV) is proposed that directly reduces operational and capital costs by implementing network functions in a software layer. Hence, this study clearly shows that the heuristic approach is more flexible than the greedy approaches commonly used for the VM placement problem.
Gaps or limitations
This approach does not include a detailed treatment of network latency among the various elements.
It also did not perform a detailed analysis of allocated resources.
After considering all these efforts, we design a VNF placement approach that mainly focuses on minimizing the network latency of a system and the lead time (Cho, 2017). We take into account the recommendations of the aforementioned studies and develop a new scheme that focuses on resource constraints using a VNF placement algorithm.
Our proposed model focuses on minimizing network latency for network-sensitive applications. The model must efficiently reduce network latency and lead time while respecting the resource allocation constraints (Sahoo et al., 2017). Before designing the model, we consider whether the three basic algorithms, VNF-FFP, VNF-LLP, and VNF-BFP, satisfy our research objectives; each has individual benefits discussed in this project. VNF-FFP and VNF-BFP are generic algorithms that focus only on resource allocation and indirectly increase network latency and lead time; we demonstrate this through experiment. VNF-LLP performs better than both, as it directly reduces network latency and lead time. Our proposed model can therefore be adopted to optimize network latency for network-sensitive applications.
VNF placement is applicable across many IT fields. Several technologies are associated with VNF placement, and they must deal with network latency and lead time issues. The majority of existing technology developments come from cloud computing, high-volume servers, and software-defined networking (SDN). The virtualization of network services such as firewalls, load balancers, intrusion detection devices, WAN accelerators, and transcoders plays a vital role in designing our model. A summary of these approaches follows.
Network Functions Virtualization builds on modern cloud computing technologies, which provide the underlying hardware virtualization mechanisms. Cloud technology also includes virtual Ethernet switches (e.g., vSwitch) to connect traffic between virtual machines and physical interfaces. Cloud infrastructures deliver methods to enhance resource availability and utilization through the management functions of virtual appliances in the network. In addition, the availability of open APIs for management and data-plane control, such as OpenFlow, OpenStack, and OpenNaaS, provides an additional degree of integration between Network Functions Virtualization and cloud computing (Gupta et al., 2019).
Standard high-volume servers also play a vital role in Network Functions Virtualisation. Industry-standard high-volume servers are built from standardized IT components such as the x86 architecture. Their most important feature is the competitive supply of subcomponents with interchangeable natures.
Software-Defined Networking (SDN) can ease the implementation and management of NFV. A network-wide software platform allows SDN to manage network complexity and enables integrated network management, control, and programmability. SDN also offers a programmable and customizable interface for controlling collections of devices at different levels of abstraction. With such a technique, users can easily reconfigure the network to plumb in a function running on a server using a few software operations. Without SDN, NFV requires more manual intervention to configure the network appropriately around software-instantiated functions.
In this research project, we design a comprehensive model using the most suitable algorithm to reduce the total network latency among network functions. The main constraint concerns the network functions and the proper use of the algorithm to reduce their lead time. For a network to work efficiently it must have low latency, since low latency enables better connections between VMs. The literature review showed that each prior work has limitations and gaps with respect to reducing network latency, because most of those works are based only on resource allocation; no single algorithm or model individually addresses breaking down network latency. To solve this issue, we design a comprehensive model that deals with CPU capacity and lead time and can find the nodes at which network functions should be placed.
This paper formulates the resource allocation problem of minimizing backhaul transport latency by mapping all service chains of predefined slices onto the appropriate substrate network resources. To reduce the latency of service requests while ensuring that CPU cost is not excessively high, the idle-state duration of each VNF should differ from one VNF to the next and be adjusted dynamically according to its popularity. In particular, a more popular VNF should have a longer idle-state duration than those that are only rarely requested. In this study we propose to calculate the idle-state duration as in Eq. (2): l_f = t_base + t_scale * (v_f / v_total), where l_f denotes the idle-state duration of VNF f, t_base denotes the initial base duration of the idle state, t_scale denotes the scale of the idle-state duration, v_f denotes the number of times VNF f is requested in the system, and v_total denotes the total number of times all VNFs are requested. If a VNF f is requested frequently, meaning it has higher popularity, the value of v_f increases and it therefore has a longer idle-state duration.
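A small sketch of this popularity-weighted idle-state rule. The functional form (a base duration plus a scaled popularity fraction) is our reconstruction of Eq. (2) from the surrounding description, and the parameter values below are examples, not measured figures.

```python
def idle_state_duration(t_base, t_scale, v_f, v_total):
    """Eq. (2) as reconstructed: a popular VNF (high v_f relative to
    v_total) keeps a longer idle-state duration before it is torn down."""
    return t_base + t_scale * (v_f / v_total)

# A VNF receiving 30 of 100 total requests idles longer than one receiving 5.
print(idle_state_duration(t_base=1.0, t_scale=10.0, v_f=30, v_total=100))  # 4.0
print(idle_state_duration(t_base=1.0, t_scale=10.0, v_f=5, v_total=100))   # 1.5
```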
The problem can be formulated as an ILP with linear constraints, subject to the specific service requirements and the network capacity and bandwidth. The inputs to the resource allocation stage are slice traffic, slice latency thresholds, network capacity, and substrate link bandwidth and length. The output is the optimal deployment path for concurrent slice requests that minimizes transport network latency while meeting the structural characteristics. In this regard, the optimization of latency has two considerations (Huang et al., 2016). Network capacity is first considered, describing the maximum transmission rate available for forwarding traffic, which also plays a critical role in network load balancing. In addition, we consider the propagation distance of forwarded traffic in terms of the location of substrate nodes and substrate link bandwidth.
Network latency between the different components of a system is first modeled and then formalized in this section. There are three possible placement cases for network functions in our model. The first case is when f1 and f2 are placed on different VMs but within the same PM. The second case is when f1 and f2 are placed on different PMs. The third case is when the two PMs are not connected to the same physical switch. Four optimization targets are defined for placing a given service chain. The result is selected from the outcome of the above procedure, which represents the summation of lead time to find the best node for a particular network function.
The first constraint ensures that the sum of CPU demands of all network functions does not exceed the CPU capacity of the associated VMs. The second constraint ensures that the sum of network bandwidth required by all network functions does not exceed the available network capacity of the associated VMs.
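The two capacity constraints can be expressed as a simple feasibility check. The data layout below (dicts of CPU and bandwidth figures, with a mapping from VNF name to VM name) is an illustrative assumption, not the paper's notation.

```python
def placement_feasible(placement, vms, vnfs):
    """Check both constraints: for every VM, the summed CPU demand and the
    summed bandwidth demand of its hosted VNFs must not exceed its capacity.

    placement: dict vnf_name -> vm_name
    vms:       dict vm_name  -> {"cpu": cores, "bw": Gbps}
    vnfs:      dict vnf_name -> {"cpu": cores, "bw": Gbps}
    """
    for vm_name, cap in vms.items():
        hosted = [vnfs[f] for f, v in placement.items() if v == vm_name]
        if sum(f["cpu"] for f in hosted) > cap["cpu"]:
            return False   # first constraint violated (CPU)
        if sum(f["bw"] for f in hosted) > cap["bw"]:
            return False   # second constraint violated (bandwidth)
    return True

vms = {"vm1": {"cpu": 16, "bw": 20}}
vnfs = {"fw": {"cpu": 4, "bw": 5}, "lb": {"cpu": 3, "bw": 4}}
print(placement_feasible({"fw": "vm1", "lb": "vm1"}, vms, vnfs))  # True
```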
One way to scale virtual machines is vertically (scaling up), which adds or removes physical resources (CPU/memory) on an existing virtual machine; this resizes the VM by changing its CPU or memory allocation. Vertical scaling typically requires downtime to add new resources and has limits defined by the hardware. Horizontal scaling (scaling out), by contrast, adds or removes virtual entities that work together as a single logical unit to adapt to changes in network load; based on resource demand, we can dynamically increase or decrease the number of VMs.
Traditionally, queueing theory is used to model servers and network switches, to estimate various metrics, and to improve network performance. In this work we combine queueing theory with integer programming in order to optimize the overall delay. To this end, we use two queueing models: M/M/1, which is used to model link queues, while server queues in the edge clouds are modeled with M/M/m. The difference between the two models is that with M/M/m we assume there are m independent resources (VMs) available to run VNFs in the system. As in the M/M/1 model, arrivals and service times follow exponential distributions with parameters λ and μ respectively (Luizelli, Raz, & Sa'ar, 2018). As defined in queueing theory, the delay is derived from the average processing time, the arrival rate, their ratio, the probability of a customer waiting in the queue (or not), and the average waiting time.
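As a sketch, the standard M/M/1 mean response time and the M/M/m mean response time via the Erlang C formula can be computed as follows. The names `lam` and `mu` stand for the arrival and service rates; the specific rates in the example are assumptions for illustration.

```python
from math import factorial

def mm1_response_time(lam, mu):
    """Mean time in an M/M/1 system (used for link queues): W = 1/(mu - lam)."""
    assert lam < mu, "queue must be stable"
    return 1.0 / (mu - lam)

def mmm_response_time(lam, mu, m):
    """Mean time in an M/M/m system (m parallel VMs), via the Erlang C formula."""
    assert lam < m * mu, "queue must be stable"
    a = lam / mu                         # offered load in Erlangs
    rho = a / m                          # per-server utilization
    tail = (a ** m) / (factorial(m) * (1 - rho))
    erlang_c = tail / (sum(a ** k / factorial(k) for k in range(m)) + tail)
    return erlang_c / (m * mu - lam) + 1.0 / mu  # waiting + service time

print(mm1_response_time(lam=8.0, mu=10.0))       # 0.5
print(round(mmm_response_time(8.0, 10.0, 2), 4))
```

With two servers the same arrival rate is absorbed with far less queueing, which is why the edge-cloud servers are modeled as M/M/m rather than M/M/1.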
Affinity and anti-affinity rules in NFV must be considered and applied carefully in order to reduce communication costs between VNF instances and to ensure high availability, resilience, security, and service performance. In this context, two main aspects should be considered: modeling and describing the affinity rules, and adapting the placement algorithm to respect the constraints. Depending on the use case, there may be instances where a pair of VNFs must be placed on the same edge cloud (e.g., VNFs exchanging a large amount of data); in this case, we define affinity constraints to place the two or more VNFs on the same host. In other cases, anti-affinity rules are used to allow critical VNFs to run on different nodes (e.g., in case of failure, it is better to have multiple instances of the same function placed on different edge clouds, or on different physical servers within the same edge cloud). Anti-affinity rules guarantee minimal cross-interference between VNFs running on the same server. Based on the above discussion, we pre-initialize an affinity matrix that defines whether two VNFs have an affinity or anti-affinity relation.
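A minimal sketch of the pre-initialized affinity relation, here stored as a dict of pairs rather than a full matrix. The rule values and VNF names are hypothetical, chosen only to illustrate how a placement is validated against the rules.

```python
# Rule values: 1 = affinity (co-locate), -1 = anti-affinity (separate hosts).
AFFINITY, ANTI_AFFINITY = 1, -1

# Hypothetical rules: fw and ids exchange much data -> co-locate;
# lb and its backup are replicas -> keep on different hosts.
affinity_rules = {
    ("fw", "ids"): AFFINITY,
    ("lb", "lb_backup"): ANTI_AFFINITY,
}

def respects_affinity(placement):
    """placement maps vnf name -> host name; unlisted pairs are unconstrained."""
    for (f1, f2), rule in affinity_rules.items():
        same_host = placement[f1] == placement[f2]
        if rule == AFFINITY and not same_host:
            return False
        if rule == ANTI_AFFINITY and same_host:
            return False
    return True

ok = {"fw": "h1", "ids": "h1", "lb": "h1", "lb_backup": "h2"}
print(respects_affinity(ok))  # True
```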
To measure the effectiveness of the proposed methodology (VNF-LLP) and compare it with the other two algorithms, the test environment consisted of:
- 30 VMs (each equipped with 16 vCPUs and 20 Gbps vNICs)
- 10 virtual switches
- four physical NICs
- two physical switches.
Each algorithm was executed multiple times before comparing their performance in the same environment (VM resources, requested network functions, network latency). In addition, we varied the VM resource utilization rate to emulate different operational points; the configurations are as follows:
Resource demand parameters: a VNF service chain of 10 different network functions with varying resource demands (CPU demand: 1–5 cores, network bandwidth demand: 1–5 Gbps). All VNFs are linked sequentially (single connection).
Workload parameters: VM resource utilization rates are controlled by three different workloads (light, medium, and heavy). Network latency parameters: all network latency values are obtained or inspired from real deployments reported by Oljira et al. (2016).
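For reference, the experiment parameters listed above can be collected in one configuration object. This is a sketch; the tuple ranges are our reading of the text, and the latency values themselves (taken from real deployments per Oljira et al., 2016) are deliberately not guessed here.

```python
# Experiment parameters from the text, gathered in one place.
EXPERIMENT = {
    "vms": 30,                       # each with 16 vCPUs and a 20 Gbps vNIC
    "vcpus_per_vm": 16,
    "vnic_gbps": 20,
    "virtual_switches": 10,
    "physical_nics": 4,
    "physical_switches": 2,
    "chain_length": 10,              # network functions per service chain
    "cpu_demand_cores": (1, 5),      # uniform range per VNF
    "bw_demand_gbps": (1, 5),
    "workloads": ("light", "medium", "heavy"),
}
print(EXPERIMENT["vms"] * EXPERIMENT["vcpus_per_vm"])  # 480 vCPUs in total
```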
As mentioned previously, the following metrics were used:
Network latency: the sum of network latency among VNFs in the requested VNF service chain.
Lead time: the time required to find VMs that have enough resources to accept a group of NFs.
Used VMs: the number of VMs used to place the requested VNFs.
Used PMs: the number of PMs used to place the requested VNFs.
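These four metrics can be computed per placed service chain as follows. The helper signatures (a pairwise latency function and a lead-time table) are assumptions made for illustration, not the paper's actual code.

```python
def chain_metrics(chain, placement, latency_fn, lead_times):
    """Compute (SumNL, SumLT, UsedVMs, UsedPMs) for one placed chain.

    chain:      ordered list of VNF names
    placement:  dict vnf -> (vm, pm)
    latency_fn: latency_fn(vm1, vm2) -> pairwise network latency
    lead_times: dict vnf -> lead time spent finding its VM
    """
    sum_nl = sum(latency_fn(placement[a][0], placement[b][0])
                 for a, b in zip(chain, chain[1:]))   # latency along the chain
    sum_lt = sum(lead_times[f] for f in chain)        # total search time
    used_vms = len({placement[f][0] for f in chain})  # distinct VMs
    used_pms = len({placement[f][1] for f in chain})  # distinct PMs
    return sum_nl, sum_lt, used_vms, used_pms

lat = lambda a, b: 0.0 if a == b else 1.0  # toy latency model
place = {"f1": ("vm1", "pm1"), "f2": ("vm1", "pm1"), "f3": ("vm2", "pm1")}
lt = {"f1": 2, "f2": 2, "f3": 2}
print(chain_metrics(["f1", "f2", "f3"], place, lat, lt))  # (1.0, 6, 2, 1)
```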
The consolidation metric represents the fraction of deployed VNF instances over the total number of VNFs requested by the overall set of SFC requests; to compute it, we consider the number of placed VNF instances on the network infrastructure over the sum of all requested VNF instances in the full set of requests. The aggregation metric represents the fraction of physical links used to host the virtual links required by all SFC requests, over the full set of requested virtual links; to compute this parameter, we include a binary variable that takes a value of one when the physical link is used.
Experiment Design and Results
This section discusses the design that finds the placement of the VNFs, chaining them together so as to reduce the overall amount of resources used in the network. For this purpose, we observe that efficient algorithms for the bin-packing problem are particularly suitable for approaching the online VNF placement problem in SDN/NFV scenarios. Given a set of items (the requested VNF instances) to be inserted into bins (the set of NFV nodes in the network), fit-based algorithms such as FIRSTFIT, BESTFIT, and WORSTFIT process one item at a time in arbitrary order and attempt to place it in a bin according to a specific strategy; if no bin is found, they open a new bin and place the item in it. We discuss a novel algorithm, VNF Low-Latency Placement (VNF-LLP), as well as two other conventional algorithms (Best-Fit and First-Fit), to address the stated network latency problem. Unlike a conventional resource allocation approach that finds hosts based purely on resource demand (e.g., CPU and network bandwidth), VNF-LLP specifically seeks hosts that minimize network latency with a relatively lower lead time.
We propose the VNF-LLP algorithm to minimize network latency by placing each VNF on the VM where the previous network function is placed, provided that the VM has enough network and CPU resources. VNF-LLP achieves its goal of minimizing the overall latency of service chains by searching for highly available VM groups placed on the same vSwitch. Algorithm 1 presents the pseudo-code of VNF-LLP. The VNF Best-Fit Placement (VNF-BFP) algorithm finds the best available VM by considering both VNF resource requirements and VM resource capacities. In addition, the network latency among VNFs should be below a defined threshold. Algorithm 2 presents its pseudo-code.
Algorithm 1: VNF Low-Latency Placement (VNF-LLP)
Input: A VNF service chain (SC) and a list of all VMs (VMs)
Output: Low-latency VNF placement
Sort all VMs according to the resource utilization rate
Seek a group of VMs that are relatively available within a specific vSwitch
foreach network function nf in SC do
    foreach virtual machine vm in VMs do
        Check resource demand of nf / capacity of vm
        if vm has enough resources then
            if vm was previously used then
                Place VNF on the previously used vm
            else
                Place VNF on the vm which has the least network latency
            Calculate network latency between network functions
            Calculate lead time (Eqn. 5)
        else
            Keep looking for a vm
Calculate SumNL(SC), SumLT(SC), SumVM(SC), SumPM(SC)
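To make Algorithm 1 concrete, the following is a simplified Python sketch of VNF-LLP. The data structures are our own illustrative assumptions: `vms` is a list of remaining CPU capacities, `chain` is a list of per-VNF CPU demands, and `latency(a, b)` is a caller-supplied function returning the network latency between two VM indices (with `latency(None, i) == 0`). The initial utilization sort, vSwitch grouping, and lead-time bookkeeping of the full algorithm are omitted here:

```python
def vnf_llp(chain, vms, latency):
    """Simplified VNF-LLP: place each network function on the VM used for
    the previous function when it still has capacity, otherwise on the
    feasible VM with the least latency to the previous placement."""
    placement = []
    prev = None
    for demand in chain:
        if prev is not None and vms[prev] >= demand:
            choice = prev                      # reuse the previous VM
        else:
            feasible = [i for i, cap in enumerate(vms) if cap >= demand]
            if not feasible:
                raise RuntimeError("no VM with enough capacity")
            choice = min(feasible, key=lambda i: latency(prev, i))
        vms[choice] -= demand
        placement.append(choice)
        prev = choice
    return placement
```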
Algorithm 2: Optimal VNF adjustment placement (VNF-BFP)
Input: A VNF service chain (SC) and a list of all virtual machines (VMs)
Output: Optimal adjusted VNF placement
Sort all virtual machines according to the resource utilization rate
foreach network function nf in SC do
    foreach virtual machine vm in VMs do
        Check resource demand of nf / capacity of vm
        if vm has sufficient resources then
            if the network latency is lower than THRESHOLD then
                Place VNF on the best vm
                Calculate network latency between network functions
                Calculate lead time
            else if no optimal vm exists then
                Increase THRESHOLD
                Calculate network latency between network functions
                Calculate lead time
        else
            Continue searching for a vm
Calculate SumNL(SC), SumLT(SC), SumVM(SC), SumPM(SC)
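A sketch of this threshold-based adjustment, again under illustrative assumptions (`vms` is a list of remaining capacities, `latency` is caller-supplied with `latency(None, i) == 0`, and `step` is our assumed increment for relaxing THRESHOLD). The best-fit rule here picks the feasible VM left with the least spare capacity:

```python
def vnf_adjust(chain, vms, latency, threshold=1.0, step=1.0):
    """Choose the best feasible VM whose latency to the previous
    placement is below THRESHOLD; when no such VM exists, relax
    THRESHOLD and keep searching."""
    placement, prev = [], None
    for demand in chain:
        if not any(cap >= demand for cap in vms):
            raise RuntimeError("no VM with enough capacity")
        t = threshold
        while True:
            feasible = [i for i, cap in enumerate(vms)
                        if cap >= demand and latency(prev, i) < t]
            if feasible:
                break
            t += step           # no optimal VM found: increase THRESHOLD
        # best fit: the feasible VM left with the least spare capacity
        choice = min(feasible, key=lambda i: vms[i] - demand)
        vms[choice] -= demand
        placement.append(choice)
        prev = choice
    return placement
```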
Although this heuristic approach does not provide optimal solutions, it outperforms the exact approach in running time. Next, we derive the complexity of the proposed VNF algorithm. The overall complexity of Algorithm 1 is essentially determined by the loop in lines 2–10 and the best-fit function. Although this algorithm attempts to use the first admissible path (i.e., the one satisfying capacity and delay requirements) to provide the placement and chaining solution, in the worst case this loop iterates k times over the sorted list of paths between the ingress and egress nodes. During each iteration, it applies the algorithm to place each VNF instance of the requested SFC; this is done to select the node where the requested VNF will be instantiated. In this work, an efficient implementation of the algorithm is used to compute the placements.
We first present a basic model that does not consider latency constraints or compression/decompression features. The reason for this choice is twofold. First, it allows a clearer explanation of the model and a step-by-step introduction of the details, keeping the model straightforward; we recall that even without these two features, the model is a combination of a network design and a facility location problem. Second, in the algorithmic phase we used for solving the model (exactly), we solve a sequence of models of increasing complexity (basic model, with latency, with compression/decompression); this presentation therefore highlights the peculiarities of each model.
Load balancing: in the present model, each demand can use a single VNF per type. The model can be extended to allow per-VNF load balancing. If the load balancing is local to an NFVI cluster, the change in the model is small; in fact, it is simply necessary to add some continuous variables accounting for the fraction of demand associated with each VNF. If the load balancing can span different clusters, then the model must be extended to allow multiple paths per demand. However, such an extension is expected to largely increase the execution time.
Multiple VM templates: for simplicity, and differently from (Zhu & Huang, 2017), we presented the model assuming a one-to-one correspondence between VNFs and VM templates (single template). Nevertheless, multiple VM templates can be considered in the model at the cost of adding one dimension/index to all variables indexed on the VNF identifiers.
Core switch as a VNF: if the core routing function is also virtualized, i.e., if the NFVI node and the network switch can be considered a single physical node that runs the core routing function as a VNF, processing the aggregate traffic independently of the demand, then we need to add a term proportional to InFlow plus OutFlow.
Moreover, with the random routing and placement algorithm, for each service request the algorithm randomly selects one path to be assigned to the request. It then randomly picks one of the nodes with sufficient capacity for the placement of VNFs. The output solution respects the constraints defined in the LP approach, and is compared with the outputs of the heuristic algorithm.
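This random baseline can be sketched as follows; the data structures (a capacity dict `vms`, a list of candidate `paths`, per-VNF demands in `chain`) are our assumptions for illustration:

```python
import random

def random_placement(chain, vms, paths, seed=None):
    """Random routing-and-placement baseline: pick one candidate path at
    random for the request, then place each VNF on a random node of that
    path that still has sufficient capacity. Returns None (request
    rejected) when no node on the path can host a VNF."""
    rng = random.Random(seed)
    path = rng.choice(paths)          # one path assigned per request
    placement = []
    for demand in chain:
        candidates = [n for n in path if vms[n] >= demand]
        if not candidates:
            return None               # constraint cannot be met
        node = rng.choice(candidates)
        vms[node] -= demand
        placement.append(node)
    return placement
```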
This research project concludes by determining the placement of VNFs using the VNF-LLP algorithm, which proves more suitable than the other two generic algorithms. Initially, the report focused on the definitions of NFV and VNFs. Then, throughout the paper, it discussed and demonstrated the reduction in network latency and lead time by considering and comparing several approaches, models, and algorithms. We further discussed several related works associated with the placement of VNFs on virtual machines, along with their gaps and limitations. Our comprehensive model was then proposed to remove those limitations and fill those gaps. The model focuses on minimizing network latency for latency-sensitive applications, and it meets its objectives while also respecting the resource allocation constraints. Having discussed the three basic algorithms VNF-FFP, VNF-LLP, and VNF-BFP, our objective is well satisfied by the outcomes, and the individual benefits of each algorithm have been discussed. We found that the two generic algorithms, VNF-FFP and VNF-BFP, focus only on resource allocation and indirectly increase network latency and lead time. VNF-LLP, on the other hand, is much better than both, as it directly targets the reduction of network latency and lead time. Our proposed model can therefore be adopted to optimize network latency for latency-sensitive applications. This research revealed that the VNF-LLP algorithm extensively reduces network latency, by up to 63.20% compared with the generic algorithms (VNF-BFP and VNF-FFP). The second finding concerns the number of virtual machines: spreading VNFs across more VMs can increase network latency.
Hence, before placing VNFs on VMs, we have to determine the right number of VMs, and only then allocate the VNFs to them.
Future work will mainly focus on refining the comprehensive model with the best-suited algorithm for placing VNFs into VMs. We have designed the model using the VNF-LLP algorithm, which is directly involved in deploying VNFs into VMs; depending on the scenario, the algorithm changes and reassigns VNFs in order to reduce latency. Concerning the most comprehensive algorithm (VNF-LLP), we include two aspects for future work. The first is adopting different types of network functions in order to perform experiments with more accuracy. The second is exploring a VNF migration policy that dynamically migrates VNFs between two or more VMs to obtain low network latency for real-time applications.
This section designs the experiment and draws conclusions from its outcomes. As seen in the methodology, three basic algorithms are considered in support of our research objectives. Since several algorithms have been proposed for reducing network latency, we have tried to make these algorithms more effective at reducing latency while improving the outcomes of our experiments. We used the VNF Best-Fit Placement Algorithm, the VNF First-Fit Placement Algorithm, and the VNF Low-Latency Placement Algorithm. In first-fit placement, the allocator keeps a list of free blocks and, on receiving a request, scans for the first block that can satisfy it; this algorithm is quite good and ensures that allocation is fast. The VNF Best-Fit Placement Algorithm is very effective at finding the most suitable VM by considering two main aspects: VNF resource requirements and VM resource capacities. The third algorithm, the VNF Low-Latency Placement Algorithm, is directly concerned with reducing network latency by placing VNFs on VMs that have sufficient network and CPU resources (Baumgartner, Reddy, & Bauschert, 2015). We have already designed and developed the VNF placement on virtual machines in the methodology chapter, and represented the whole architecture of our proposed system. We now present the experimental outcomes, discuss the results, and then take an in-depth look at the optimal results achieved by our proposed model.
In order to evaluate the network latency of the three algorithms, we demonstrate the effectiveness of our proposed algorithm by comparing it with the generic algorithms. The table below summarizes, per scenario, the network latency and lead time of the corresponding algorithms.
Table 1: Network latency, lead time, and number of VMs/PMs per scenario and algorithm
|Scenario||Algorithm||Network Latency||Lead Time||VMs||PMs|
The above table clearly shows that the network latency of VNF-LLP is almost 64% lower compared with the other two algorithms. Hence, the VNF-LLP algorithm is clearly very effective at reducing network latency. Under heavier load, the VNF-LLP algorithm performs better and gives a clear reduction in network latency and lead time. The basic reason lies in the definition of the algorithm: VNF-LLP performs latency-aware VNF placement, whereas the other two algorithms perform resource-availability-aware placement. Each of the above scenarios (whether the load is light, medium, or heavy) shows reduced network latency and reduced lead time for VNF-LLP.
We measured the response time of each request in the workload, and then applied the three proposed algorithms to evaluate the lead time. Since the VNF-FFP and VNF-BFP algorithms do not focus on reducing network latency, they also place less emphasis on lead time. The VNF-LLP algorithm has a comparatively higher lead time because it searches for optimal solutions. Previous work has shown that this higher lead time is spent effectively by VNF-LLP on additional computational work, which helps find the suitable virtual machines that deliver the reduced network latency. Fig. 1 clearly shows the increase in lead time, which is directly connected with the performance of the virtual machines.
With the VNF-LLP algorithm, we need fewer VMs to place the same set of network functions compared with the other two algorithms, so using this algorithm is quite cost-effective. The basic reason for this conclusion is the strong correlation between the number of VMs and network latency observed in our experiment. We also found that using many VMs can increase network latency: as the number of VMs increases, the network complexity increases, and hence the total network latency increases as well. We therefore have to watch the number of VMs used while focusing on network latency. If VNFs are spread across more VMs, network latency can be higher than in a scenario where the VNFs are placed on fewer VMs.
We now focus on minimizing the lead time by limiting the number of VM inspections needed to place a set of VNFs. Essentially, fewer VM inspections help to reduce the lead time; and since network latency and lead time are directly connected, the number of inspections considerably influences (reduces) the total network latency.
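As an illustrative proxy (our assumption, not the paper's measurement), the lead time of a placement run can be approximated by counting VM inspections under a first-fit scan:

```python
def first_fit_inspections(chain, vms):
    """Lead-time proxy: count how many VMs are inspected while placing a
    chain of VNF demands with a first-fit scan over `vms` (a list of
    remaining capacities). Fewer inspections means lower lead time."""
    inspections = 0
    for demand in chain:
        for i, cap in enumerate(vms):
            inspections += 1
            if cap >= demand:
                vms[i] -= demand
                break
    return inspections
```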
We have discussed three different VNF placement algorithms in order to compare them and find the one best suited to our research objective. Each generic algorithm has benefits in its own area, and we examined those benefits with respect to our objective, i.e., the reduction of network latency and lead time. Although the two generic algorithms offer several benefits toward our objectives, they cannot achieve low network latency without modification. Both generic algorithms, VNF-FFP and VNF-BFP, deal with resource allocation and utilization rate and fail to properly address network latency; hence, these two algorithms tend to increase the network latency.
Although VNF-LLP has a larger lead time compared with VNF-FFP, it considerably reduced network latency by at least 18%, varying across scenarios. As already seen in Table 1, network latency and lead time vary across the different scenarios and algorithms. With the VNF-LLP algorithm, the heavier the load, the more pronounced the performance improvement. Under light loads, VNF-FFP resulted in 20% more network latency while we increased the lead time by 29%. Under medium loads, the reduction in network latency and the increase in lead time are directly connected and roughly balanced, at about 18%. Finally, under heavy load, the VNF-LLP algorithm has several benefits in terms of reducing network latency: where VNF-FFP resulted in 99% of the network latency with only 35% of the lead time, VNF-LLP is very efficient at reducing both the network latency and the lead time. Hence, although VNF-LLP delivers effective outcomes compared with VNF-FFP for light to medium loads, it is not as appropriate for heavier loads. According to the experimental results, it is clear that the comprehensive model approach works properly, delivering results within approximately 9%–10% of the optimal latency. Thus, our proposed algorithm is very effective for satisfying the main objective of this research project.
|Task name||Duration||Start||Finish||Resource Names|
|VNF Best-Fit Placement Algorithm (VNF-BFP)||2 days||12th May||14th May||JAVA|
|VNF First-Fit Placement Algorithm (VNF-FFP)||3 days||15th May||18th May||JAVA|
|VNF Low-Latency Placement Algorithm (VNF-LLP)||3 days||19th May||21st May||JAVA|
|Finding the appropriate model algorithm||1 day||22nd May||22nd May|
|Designing the comprehensive model||3 days||22nd May||25th May||Networking|
|Adopting VNF-LLP in our proposed model||3 days||26th May||28th May||Networking|
Cao, J., Zhang, Y., An, W., Chen, X., Han, Y., & Sun, J. (2016, December). VNF placement in hybrid NFV environment: Modeling and genetic algorithms. In 2016 IEEE 22nd International Conference on Parallel and Distributed Systems (ICPADS) (pp. 769-777). IEEE.
Ashrafi, S. (2018). U.S. Patent Application No. 15/689,769.
Agarwal, S., Malandrino, F., Chiasserini, C. F., & De, S. (2019). VNF Placement and Resource Allocation for the Support of Vertical Services in 5G Networks. IEEE/ACM Transactions on Networking, 27(1), 433-446.
Xia, M., Shirazipour, M., Zhang, Y., Green, H., & Takacs, A. (2015). Network function placement for NFV chaining in packet/optical datacenters. Journal of Lightwave Technology, 33(8), 1565-1570.
Yala, L., Frangoudis, P. A., & Ksentini, A. (2018, December). Latency and availability driven VNF placement in a MEC-NFV environment. In 2018 IEEE Global Communications Conference (GLOBECOM) (pp. 1-7). IEEE.
Patel, A., Vutukuru, M., & Krishnaswamy, D. (2017, November). Mobility-aware VNF placement in the LTE EPC. In 2017 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN) (pp. 1-7). IEEE.
Cziva, R., Anagnostopoulos, C., & Pezaros, D. P. (2018, April). Dynamic, latency-optimal VNF placement at the network edge. In IEEE INFOCOM 2018-IEEE Conference on Computer Communications (pp. 693-701). IEEE.
Baumgartner, A., Reddy, V. S., & Bauschert, T. (2015, September). Combined virtual mobile core network function placement and topology optimization with latency bounds. In 2015 Fourth European Workshop on Software Defined Networks (pp. 97-102). IEEE.
Cho, D., Taheri, J., Zomaya, A. Y., & Bouvry, P. (2017, June). Real-time virtual network function (VNF) migration toward low network latency in cloud environments. In 2017 IEEE 10th International Conference on Cloud Computing (CLOUD) (pp. 798-801). IEEE.
Zhu, H., & Huang, C. (2017, September). Cost-efficient VNF placement strategy for IoT networks with availability assurance. In 2017 IEEE 86th Vehicular Technology Conference (VTC-Fall) (pp. 1-5). IEEE.
Luizelli, M. C., Raz, D., & Sa’ar, Y. (2018, April). Optimizing NFV chain deployment through minimizing the cost of virtual switching. In IEEE INFOCOM 2018-IEEE Conference on Computer Communications (pp. 2150-2158). IEEE.
Huang, G., Wang, S., Zhang, M., Li, Y., Qian, Z., Chen, Y., & Zhang, S. (2016, November). Auto scaling virtual machines for web applications with queueing theory. In 2016 3rd International Conference on Systems and Informatics (ICSAI) (pp. 433-438). IEEE.
Oljira, D. B., Brunstrom, A., Taheri, J., & Grinnemo, K. J. (2016, December). Analysis of network latency in virtualized environments. In 2016 IEEE Global Communications Conference (GLOBECOM) (pp. 1-6). IEEE.
Li, X., & Qian, C. (2016, January). A survey of network function placement. In 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC) (pp. 948-953). IEEE.
Benkacem, I., Taleb, T., Bagaa, M., & Flinck, H. (2018). Optimal VNFs placement in CDN slicing over multi-cloud environment. IEEE Journal on Selected Areas in Communications, 36(3), 616-627.
Sahoo, J., Salahuddin, M. A., Glitho, R., Elbiaze, H., & Ajib, W. (2017). A survey on replica server placement algorithms for content delivery networks. IEEE Communications Surveys & Tutorials, 19(2), 1002-1026.
Gupta, L., Jain, R., Erbad, A., & Bhamare, D. (2019). The P-ART framework for placement of virtual network services in a multi-cloud environment. Computer Communications.
Taleb, T., Samdanis, K., Mada, B., Flinck, H., Dutta, S., & Sabella, D. (2017). On multi-access edge computing: A survey of the emerging 5G network edge cloud architecture and orchestration. IEEE Communications Surveys & Tutorials, 19(3), 1657-1681.
Cho, D. (2017). Network function virtualization (NFV) resource management for low network latency (Master's thesis, University of Sydney).