Sunday, March 31, 2019
Multi-Campus ICT Equipment Virtualization Architecture
Multi-Campus ICT Equipment Virtualization Architecture for Cloud and NFV Integrated Service

Abstract: We propose a virtualization architecture for multi-campus information and communication technology (ICT) equipment with integrated cloud and NFV capabilities. The aim of this proposal is to migrate most of the ICT equipment on campus premises into cloud and NFV platforms. Adopting this architecture would make most ICT services secure and reliable and their disaster recovery (DR) economically manageable. We also analyze a cost function and show the cost advantages of the proposed architecture, describe implementation design issues, and report a preliminary experiment on NFV DR transactions. This architecture would encourage academic institutions to migrate their own ICT systems located on their premises into a cloud environment.

Keywords: NFV, Data Center Migration, Disaster Recovery, Multi-campus Network

I. INTRODUCTION

There are many academic institutions that have multiple campuses located in different cities. These institutions need to provide information and communication technology (ICT) services, such as e-learning services, equally to all students on every campus. Usually, information technology (IT) infrastructures, such as application servers, are deployed at a main campus, and these servers are accessed by students on each campus. For this purpose, the local area network (LAN) on each campus is connected to the main campus LAN via a virtual private network (VPN) over a wide area network (WAN). In addition, Internet access service is provided to all students in the multi-campus environment. To access the Internet, security devices, such as firewalls and intrusion detection systems (IDSs), are indispensable, as they protect computing resources from malicious cyber activities.

With the emergence of virtualization technologies such as cloud computing [1] and network functions virtualization (NFV) [2], [3], we expect that ICT infrastructures such as compute servers, storage devices, and network equipment can be moved from campuses to data centers (DCs) economically. Some organizations have begun to move their ICT infrastructures from their own premises to outside DCs in order to improve security, stability, and reliability. There are also many contributions on achieving DR capabilities with cloud technologies [4], [5], [6]. Active-passive replication and active-active replication are the expected techniques for achieving DR capabilities; in these replication schemes, a redundant backup system is needed, dedicated at a secondary site. With migration recovery [4], these backup resources can be shared among many users. These studies mainly focus on application servers, while integrated DR capability for ICT infrastructures, covering both application and network infrastructures, is still immature.

We propose a multi-campus ICT equipment virtualization architecture for integrated cloud and NFV capabilities.
The aim of this proposal is to migrate entire ICT infrastructures on campus premises into cloud and NFV platforms. Adopting this architecture for multi-campus networks would improve access link utilization, security device utilization, network transmission delay, disaster tolerance, and manageability at the same time. We also analyze the cost function and show the cost advantages of the proposed architecture. To evaluate the feasibility of our proposed architecture, we built a test bed on SINET5 (Science Information NETwork 5) [7], [8], [9]. We describe the test-bed design, and a preliminary experiment on reducing the recovery time of VNFs is reported.

The rest of this paper is organized as follows. Section II gives the background of this work. Section III presents the proposed multi-campus network virtualization architecture. Section IV shows an evaluation of the proposed architecture in terms of cost advantages and implementation results. Section V concludes the paper and discusses future work.

II. BACKGROUND OF THIS WORK

SINET5 is a Japanese academic backbone network for about 850 research institutes and universities and provides network services to about 30 million academic users. SINET5 was fully constructed and put into operation in April 2016. SINET5 plays an important role in supporting a wide range of research fields that need high-performance connectivity, such as high-energy physics, nuclear fusion science, astronomy, geodesy, seismology, and computer science. Figure 1 shows the SINET5 architecture. It provides points of presence, called SINET data centers (DCs), deployed in each prefecture in Japan. At each SINET DC, an internet protocol (IP) router, an MPLS-TP system, and a ROADM are deployed. The IP router accommodates access lines from research institutes and universities. Every pair of IP routers is connected by a pair of MPLS-TP paths, which provide low latency and high reliability. The IP routers and MPLS-TP systems are connected by 100-Gbps-based optical paths. Therefore, data can be transmitted from one SINET DC to another at up to 100 Gbps throughput. In addition, users who have 100 Gbps access lines can transmit data to other users at up to 100 Gbps throughput.

Currently, SINET5 provides a direct cloud connection service. In this service, commercial cloud providers connect their data centers directly to SINET5 with high-speed links, such as 10 Gbps links. Therefore, academic users can access cloud computing resources with very low latency and high bandwidth via SINET5, and they can receive high-performance communication between campuses and cloud computing resources. Today, 17 cloud service providers are directly connected to SINET5, and more than 70 universities have been using cloud resources directly via SINET5.

To evaluate virtualization technologies such as cloud computing and NFV, we constructed a test-bed platform (shown as the NFV platform in Fig. 1) and will evaluate the network delay effect on ICT services with this test bed. The NFV platform is constructed at four SINET DCs in major cities in Japan: Sapporo, Tokyo, Osaka, and Fukuoka. At each site, the facilities are composed of computing resources, such as servers and storage, network resources, such as layer-2 switches, and controllers, such as an NFV orchestrator and a cloud controller. The layer-2 switch is connected to a SINET5 router at the same site with a high-speed (100 Gbps) link. The cloud controller configures the servers and storage, and the NFV orchestrator configures the VNFs on the NFV platform.
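As a concrete (if simplified) picture of this test bed, the following sketch models the four sites and their resources as plain data. The site names and link speed come from the description above; the server counts and controller labels are assumptions made for illustration only.

# Illustrative model of the NFV platform test bed described above.
# Site names and the 100 Gbps uplink come from the text; the server
# counts and controller labels are assumed values for this sketch.
from dataclasses import dataclass, field
from typing import List

@dataclass
class NfvSite:
    name: str              # SINET DC hosting the NFV platform
    uplink_gbps: int       # layer-2 switch to SINET5 router link
    compute_servers: int   # assumed count, not reported by the authors
    controllers: List[str] = field(
        default_factory=lambda: ["cloud controller", "NFV orchestrator"])

SITES = [
    NfvSite("Sapporo", 100, 4),
    NfvSite("Tokyo", 100, 4),
    NfvSite("Osaka", 100, 4),
    NfvSite("Fukuoka", 100, 4),
]

for site in SITES:
    print(f"{site.name}: {site.uplink_gbps} Gbps uplink to SINET5 router, "
          f"{site.compute_servers} servers, controllers: {', '.join(site.controllers)}")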
Users can set up and release VPNs between universities, commercial clouds, and the NFV platform dynamically over SINET with an on-demand controller. This on-demand controller configures the routers through a NETCONF interface; it also sets up the VPN correlated with the NFV platform through a REST interface.

Today, many universities have multiple campuses deployed over a wide area. In such a multi-campus university, many VPNs (VLANs), for example hundreds of VPNs, need to be configured over SINET to extend the inter-campus LAN. To satisfy this demand, SINET has started a new VPN service, called the virtual campus LAN service. With this service, the layer-2 domains of multiple campuses can be connected as if through a single layer-2 switch, using preconfigured VLAN ranges (e.g., 1000-2000).
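As a rough illustration of how such a VPN might be requested programmatically, the sketch below posts a VLAN range to the on-demand controller through a hypothetical REST endpoint. The URL, payload fields, and authentication are assumptions made for this example, not the actual SINET API; only the overall flow (a portal issuing a REST call to the on-demand controller) follows the description above.

# Hedged sketch: request an inter-campus VPN (VLAN range) via a REST call.
# Endpoint, payload schema, and token handling are hypothetical.
import json
import urllib.request

ON_DEMAND_CONTROLLER = "https://controller.example.ac.jp/api/v1/vpns"  # assumed URL

def request_campus_vpn(campus_sites, nfv_site, vlan_range=(1000, 2000), token="REPLACE_ME"):
    payload = {
        "service": "virtual-campus-lan",         # layer-2 VPN service described in the text
        "endpoints": campus_sites + [nfv_site],  # campuses plus the NFV/cloud DC
        "vlan_range": {"start": vlan_range[0], "end": vlan_range[1]},
    }
    req = urllib.request.Request(
        ON_DEMAND_CONTROLLER,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:    # returns the controller's response body
        return json.loads(resp.read())

# Example (not executed): extend the LAN of two campuses to the Tokyo NFV site.
# request_campus_vpn(["main-campus", "sub-campus-1"], "nfv-tokyo")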
III. PROPOSED MULTI-CAMPUS ICT EQUIPMENT VIRTUALIZATION ARCHITECTURE

In this section, the proposed architecture is described. The architecture consists of two parts. First, we describe the network architecture and clarify the issues with it. Next, the NFV/cloud control architecture is described.

A. Proposed multi-campus network architecture

The multi-campus network architecture is shown in Figure 2. There are two legacy network architectures and a proposed network architecture. In legacy network architecture 1 (LA1), Internet traffic for multiple campuses is delivered to a main campus (shown as a green line) and checked by security devices. After that, the Internet traffic is distributed to each campus (shown as a blue line). ICT applications, such as e-learning services, are deployed on the main campus, and access traffic to the ICT applications is carried by VPN over SINET (shown as a blue line). In legacy network architecture 2 (LA2), Internet access differs from LA1: it is delivered directly to each campus and checked by security devices deployed at each campus. In the proposed architecture (PA), the main ICT applications are moved from the main campus to an external NFV/cloud DC. Thus, students on both the main campus and the sub-campuses access ICT applications via VPN over SINET. Also, Internet traffic traverses virtual network functions (VNFs), such as virtual routers and virtual security devices, located at NFV/cloud DCs. Internet traffic is checked by the virtual security devices and delivered to each main/sub-campus via VPN over SINET.

There are pros and cons among these architectures. Here, they are compared across five points: access link utilization, security device utilization, network transmission delay, disaster tolerance, and manageability.

(1) Access link utilization

The cost of an access link from a sub-campus to the WAN is the same in LA1, LA2, and PA. The cost of an access link from the main campus to the WAN is larger in LA1 than in LA2 and PA because redundant traffic traverses the link. In PA, however, an additional access link from the NFV/cloud DC to the WAN is required, so evaluating the total access link cost is important. In this evaluation, it is assumed that the additional access links from NFV/cloud DCs to the WAN are shared among the multiple academic institutions that use the NFV/cloud platform, and the cost is evaluated taking this sharing into account.

(2) Security device utilization

LA1 and PA are more efficient than LA2 because Internet traffic is concentrated in LA1 and PA and a statistical traffic-multiplexing effect is expected. In addition, in PA, the amount of physical computing resources can be reduced because virtual security devices share physical computing resources among multiple users. Therefore, the cost of virtual security devices for each user will be reduced.

(3) Network transmission delay

The network delay of Internet traffic with LA1 is longer than with LA2 and PA because Internet traffic to sub-campuses is detoured and transits the main campus in LA1. In LA2, Internet traffic to a sub-campus is delivered directly from an Internet exchange point on the WAN to the sub-campus, so delay is suppressed. In PA, network delay can also be suppressed because the NFV/cloud data center can be chosen so that it is located near an Internet access gateway on the WAN. On the other hand, the network delay for ICT application services will be longer in PA than in LA1 and LA2. Therefore, the effect of a longer network delay on the quality of IT application services has to be evaluated.

(4) Disaster tolerance

Regarding Internet service, LA1 is less disaster tolerant than LA2. In LA1, when a disaster occurs around the main campus and the network functions of that campus go down, students on the other sub-campuses cannot access the Internet. Regarding IT application services, IT services cannot be accessed by students when a disaster occurs around the main campus or data center. In PA, the NFV/cloud DC is located in an environment robust against earthquakes and flooding, so robustness is improved compared with LA1 and LA2.

Today, systems capable of disaster recovery (DR) are mandatory for academic institutions, so service disaster recovery functionality is required. In PA, backup ICT infrastructures located at a secondary data center can be shared with other users. Thus, no dedicated redundant resources are required in steady-state operation, and the resource cost can be reduced. However, if VM migration cannot be fast enough to keep services running, active-passive or active-active replication has to be adopted. Therefore, reducing recovery time is required so that migration recovery can achieve DR manageability more economically.

(5) Manageability

LA1 and PA are easier to manage than LA2. Because security devices are concentrated at one site (the main campus or the NFV/cloud data center), the number of devices can be reduced, which improves manageability.

There are three issues to consider when adopting the PA:
- evaluating the access link cost of an NFV/cloud data center,
- evaluating the network delay effect on ICT services, and
- evaluating the migration period for migration recovery replication.
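To make the first of these issues concrete, the following rough sketch compares the total access-link cost of LA1 and PA under an assumed, simplified cost model. The formulas and all numbers below are illustrative only; they are not the cost function evaluated in Section IV.

# Back-of-the-envelope comparison of total access-link cost in LA1 and PA.
# The model and every number are illustrative assumptions; the paper's
# actual cost function is defined per institution u with n_u sub-campuses.

def la1_cost(n_sub, t_internet, t_app, unit_cost):
    # Main-campus link: its own Internet traffic, plus sub-campus Internet
    # traffic that enters and leaves again (the redundant traversal), plus
    # application traffic arriving from the sub-campuses.
    main_link = t_internet + 2 * n_sub * t_internet + n_sub * t_app
    sub_links = n_sub * (t_internet + t_app)
    return unit_cost * (main_link + sub_links)

def pa_cost(n_sub, t_internet, t_app, unit_cost, sharing_factor):
    # Every campus link carries only its own traffic; the NFV/cloud DC link
    # is shared among `sharing_factor` institutions, so each pays a share.
    campus_links = (n_sub + 1) * (t_internet + t_app)
    dc_link_share = (n_sub + 1) * (t_internet + t_app) / sharing_factor
    return unit_cost * (campus_links + dc_link_share)

# Assumed example: 3 sub-campuses, 2 Gbps of Internet and 1 Gbps of
# application traffic per campus, unit cost 1 per Gbps of link capacity,
# and an NFV/cloud DC link shared by 10 institutions.
print(la1_cost(3, 2, 1, 1.0))      # -> 26.0
print(pa_cost(3, 2, 1, 1.0, 10))   # -> 13.2, cheaper when sharing is high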
B. NFV and cloud control architecture

For the following two reasons, there is strong demand to keep using legacy ICT systems, so legacy ICT systems have to be moved to NFV/cloud DCs as virtual application servers and virtual network functions. One reason is that institutions have developed their own legacy ICT systems on their own premises with vendor-specific features. The second reason is that an institution's work flows are not easily changed, and the same usability for end users is required. Therefore, these legacy ICT infrastructures deployed on campus premises should continue to be used in the NFV/cloud environment. In the proposed multi-campus architecture, these application servers and network functions are controlled by per-user orchestrators.

Figure 3 shows the proposed control architecture. Each institution deploys its ICT system on IaaS services. VMs are created and deleted through the application programming interface (API) provided by the IaaS providers. Each institution sets up an NFV orchestrator, an application orchestrator, and a management orchestrator on VMs. Active and standby orchestrators run in the primary and secondary data centers, respectively, and the active and standby orchestrators check each other's liveness. The NFV orchestrator creates the VMs, installs the virtual network functions, such as routers and virtual firewalls, and configures them. The application orchestrator installs the applications on VMs and sets them up. The management orchestrator registers these applications and virtual network functions with monitoring tools and saves the logs output by the IT service applications and network functions.

When the active data center suffers a disaster and the active orchestrators go down, the standby orchestrators detect that the active orchestrators are down. They then start establishing the virtual network functions and the application and management functions. After that, the VPN is connected to the secondary data center in cooperation with the VPN controller of the WAN. In this architecture, each institution can select NFV orchestrators that support the user's legacy systems.

IV. EVALUATION OF PROPOSED NETWORK ARCHITECTURE

This section details an evaluation of the access link cost of the proposed network architecture. The test-bed configuration is also introduced, and an evaluation of the migration period for migration recovery is shown.

A. Access link cost of NFV/cloud data center

In this sub-section, an evaluation of the access link cost of PA compared with LA1 is described. The network cost is defined for an institution, u, that has a main campus and n_u sub-campuses, in terms of the traffic amount of institution u.

B. Test-bed configuration

Different sites can be connected between a user site and cloud sites by a SINET VPLS (Fig. 7). This VPLS can be dynamically established through a portal that uses the REST interface of the on-demand controller. For upper-layer services such as Web-based services, virtual network appliances, such as virtual routers, virtual firewalls, and virtual load balancers, are created on the servers through the NFV orchestrator. DR capabilities for the NFV orchestrator are under deployment.

C. Migration period for disaster recovery

We evaluated the VNF recovery process for disaster recovery. This process has four steps:

Step 1: Host OS installation
Step 2: VNF image copy
Step 3: VNF configuration copy
Step 4: VNF process activation

The process starts with host OS installation because some VNFs are tightly coupled with the host OS and hypervisor. There are several kinds and versions of host OS, so the host OS can be changed to suit the VNF. After host OS installation, the VNF images are copied into the created VMs. Then, the VNF configuration parameters are adjusted to the attributes of the secondary data center environment (for example, VLAN-ID and IP address), and the configuration parameters are installed into the VNF. After that, the VNF is activated.
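The flow the standby orchestrator follows can be sketched as below. Every function body is a placeholder (the real steps go through hypervisor, image-store, and orchestrator APIs that are not specified here); only the detection-then-four-step ordering is taken from the text above.

# Hedged sketch of the four-step VNF recovery flow run by the standby
# orchestrator once it detects that the active site is down.
import time

def is_active_site_alive() -> bool:
    # Placeholder for the mutual liveness check (e.g., a periodic heartbeat)
    # between the active and standby orchestrators.
    return False

def install_host_os(env):                 # Step 1: host OS chosen to suit the VNF
    print(f"installing host OS at {env}")

def copy_vnf_image(image, env):           # Step 2: copy the VNF image into a new VM
    print(f"copying {image} to {env}")
    return {"image": image, "env": env}

def adapt_and_install_config(vm, config, env):  # Step 3: rewrite VLAN-ID / IP addresses
    vm["config"] = {**config, "site": env}
    print(f"configuration installed on {vm['image']}")

def activate_vnf(vm):                     # Step 4: start the VNF process
    print(f"{vm['image']} activated")

def failover(vnfs, secondary_env, check_interval_s=10):
    while is_active_site_alive():
        time.sleep(check_interval_s)
    # Active orchestrators unreachable: rebuild every VNF at the secondary DC,
    # after which the VPN is reconnected via the WAN on-demand controller.
    for image, config in vnfs:
        install_host_os(secondary_env)
        vm = copy_vnf_image(image, secondary_env)
        adapt_and_install_config(vm, config, secondary_env)
        activate_vnf(vm)

failover([("virtual-router.qcow2", {"vlan": 1001, "mgmt_ip": "192.0.2.10"})], "secondary-dc")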
In our test environment, a virtual router can be recovered from the primary data center to the secondary data center, and the total duration of recovery is about 6 min. The durations of Steps 1-4 are 3 min 13 sec, 3 min 19 sec, 11 sec, and 17 sec, respectively.

To shorten the recovery time, the standby VNF can currently be pre-set up and activated. If the same configuration can be applied in the secondary data center network environment, snapshot recovery is also available. In this case, Step 1 is eliminated, and Steps 2 and 3 are replaced by copying a snapshot of an active VNF image, which takes about 30 sec, so the recovery time is about 30 sec.

V. CONCLUSION

Our method using cloud and NFV functions can achieve DR at lower cost. We proposed a multi-campus equipment virtualization architecture for cloud and NFV integrated service. The aim of this proposal is to migrate entire ICT infrastructures on campus premises into cloud and NFV platforms. This architecture would encourage academic institutions to migrate their own developed ICT systems located on their premises into a cloud environment. Adopting this architecture would make entire ICT systems secure and reliable, and the DR of ICT services could be economically manageable. In addition, we analyzed the cost function, showed the cost advantages of the proposed architecture, described implementation design issues, and reported a preliminary experiment on the NFV DR transaction.