Moving the Network to the Cloud: the Cloud Central Office Revolution and its Implications for the Optical Layer

11/16/2021
by   M. Ruffini, et al.
Trinity College Dublin

SDN and NFV have recently changed the way we operate networks. By decoupling control and data plane operations and virtualising their components, they have opened up new frontiers towards reducing network ownership costs and improving usability and efficiency. Recently, their applicability has moved towards public telecommunications networks, with concepts such as the cloud-CO, which has pioneered their use in access and metro networks: an idea that has quickly attracted the interest of network operators. By merging mobile, residential and enterprise services into a common framework, built around commoditised data-centre-type architectures, future embodiments of this CO virtualisation concept could achieve significant capital and operational cost savings, while providing a customised network experience to high-capacity and low-latency future applications. This tutorial provides an overview of the various frameworks and architectures outlining current network disaggregation trends that are leading to the virtualisation/cloudification of central offices. It also provides insight into the virtualisation of the access-metro network, showcasing new software functionalities such as virtual DBA mechanisms for PONs. In addition, we explore how it can bring together different network technologies to enable convergence of mobile and optical access networks and pave the way for the integration of disaggregated ROADM networks. Finally, this paper discusses some of the open challenges towards the realisation of networks capable of delivering guaranteed performance, while sharing resources across multiple operators and services.


I Introduction

The SDN and NFV concepts have changed the way we design and operate networks. While they have only recently gained widespread adoption, much of the underlying work that led to their success dates back over twenty years. By the late 1990s, the Internet, which was originally intended as a platform for research, had become stagnant and closed to innovation. The critical routers and switches of large telecommunication operators and Internet Service Providers (ISPs) had become overblown monolithic platforms that required very specialist skills to configure and to diagnose when things went wrong. Since the Internet was no longer just a research network, but was supporting critical commercial and government services, there was no clear governance over how systemic issues in the technology and processes could be rectified or how deficiencies could be improved. Thus, novel design ideas, such as that of programmable networks, started to rise in the late 1990s, for example with the idea of active networks [1], [2], where network nodes would expose their resources and capabilities through programming interfaces, allowing online modifications to routers' behaviour. Shortly after, around the early 2000s, work started to appear on the separation of control and data planes, as network operators were looking for methods to increase their control over the network (e.g., with respect to legacy distributed routing algorithms), for traffic engineering purposes. On one hand, this led to the development of open interfaces between control and data plane, driven by projects like the Forwarding and Control Element Separation (ForCES) [3]. On the other hand, we saw the rise of centralised network control systems, including the Routing Control Platform (RCP) [4] and the Path Computation Element (PCE) [5]. These trends were also supported by new hardware developments at that time: the fast increase in capacity requirements drove vendors to develop packet forwarding logic in hardware, leading to its separation from the software control plane; in addition, the commoditisation of computing platforms meant that general purpose servers could provide more memory and processing resources than embedded routing processors. This is also the time when vendors of commodity General Purpose Processors (i.e., Intel and AMD) included instruction sets in their processors for hardware support of virtualisation, up to the standard defined by Popek and Goldberg [6]. Prior to this, virtualisation required specific mainframe hardware (for instance, running an IBM/370 mainframe), or the customisation of applications and operating systems, a process which is both expensive and difficult to maintain. For an in-depth survey of the road to SDN, readers should refer to [7].

Building upon this previous work, the introduction of the OpenFlow protocol [8], towards the end of the 2000s, started the SDN revolution in telecommunications networks. The novelty, with respect to previous attempts and ideas, was to provide Application Programming Interfaces (APIs) that could work with existing off-the-shelf hardware switches, rather than requiring new hardware. Taking advantage of the trend to build switches using merchant silicon, OpenFlow opened up a new world of possibilities for academic researchers. It provided the ability to develop and test new ideas over real networks, paving the way for many research projects across the globe developing new network protocols and functionalities. Within only a few years, SDN managed to capture the attention of the data centre and telecoms industry. For operators, SDN is an enabler for the application of DevOps principles and Continuous Integration/Continuous Deployment (CI/CD) to the management and configuration of their networks. Such principles were already being applied to other systems in their companies, such as databases, software platforms and web sites. The practice of Continuous Integration is one where changes and releases of applications are continuously tested against a complete set of functional and unit tests. This facilitates the release of features to the production system or network on a daily basis, and offers the ability to easily regress to a snapshot of a previous release. SDN can thus enable similar innovation in network control and management operations.

In the meantime, in parallel with the control plane programmability offered by SDN, the concepts of data plane programmability and virtualisation also continued to develop. Extending the concept of virtualisation of the computing environment, where multiple virtual machines could run independently over the same server, the idea of a network hypervisor (or FlowVisor in OpenFlow terms) was introduced to allow different controllers to operate the same physical switch. When applied to a complex network of several nodes, e.g., in a data centre environment, network virtualisation enabled the creation of an abstraction layer that could be used to set up several virtual networks across shared hardware infrastructure. With the support of appropriate software [9], virtual networks could be dynamically created to connect virtual machines; they could be modified online when the associated virtual machines were migrated, or their number was increased or decreased. Coupled with SDN, network virtualisation enabled unprecedented network programmability and flexibility in the data centre environment.
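
To make the hypervisor idea concrete, the sketch below is a toy Python model (FlowVisor itself is a Java application and its real flowspace rules and interfaces are much richer): it partitions a switch's flowspace by VLAN ID so that each tenant controller only handles traffic in its own slice.

```python
# Toy sketch of flowspace slicing, in the spirit of FlowVisor.
# Slice names, VLAN ranges and controller addresses are hypothetical.

from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    vlan_range: range   # the flowspace owned by this tenant
    controller: str     # address of the tenant's SDN controller

SLICES = [
    Slice("tenant-a", range(100, 200), "10.0.0.1:6633"),
    Slice("tenant-b", range(200, 300), "10.0.0.2:6633"),
]

def dispatch(vlan_id: int) -> str:
    """Return the controller allowed to handle this part of the flowspace."""
    for s in SLICES:
        if vlan_id in s.vlan_range:
            return s.controller
    raise PermissionError(f"VLAN {vlan_id} is not part of any slice")

print(dispatch(150))   # -> 10.0.0.1:6633 (tenant-a's controller)
```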

The next step was the move of the SDN and virtualisation concepts out of the data centre and into the public network. Pressed by progressively squeezed operating margins, operators had been looking for ways to reduce capital and operational expenditure and for means to generate new revenue from their networks. NFV [10] aims at virtualising typical functionalities of telecommunications networks, so that they can be decoupled from the hardware. Since most of today's network functions operate in the digital domain, NFV allows moving them from expensive dedicated hardware to commodity servers. In addition, the ability to run functions as software over a shared compute infrastructure facilitates the creation of new services by dynamic composition of chains of Virtual Network Functions (VNFs). The use cases are several: multi-tenancy, as different software instances can be handed to virtual operators, which can have greater control over their virtual slice; a multi-service platform, as different services can be provided with customised resources to meet their compute and networking capacity requirements; and improving the efficiency of running cloud Radio Access Networks (cloud-RAN), as the location of their protocol stack functions can be optimised depending on actual requirements (e.g., real user demand) and computing/networking resource availability.
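
As a toy illustration of service creation by chaining, the sketch below composes two services from ordered lists of VNFs. The function names and packet model are invented for illustration; real platforms describe chains through descriptors such as the ETSI network service descriptors.

```python
# Toy sketch of dynamic service composition from a chain of VNFs.
# VNF behaviours and the dict-based packet model are hypothetical.

def firewall(pkt):      return pkt if pkt.get("allowed", True) else None
def nat(pkt):           pkt["src"] = "203.0.113.1"; return pkt
def load_balancer(pkt): pkt["dst"] = ["10.0.0.5", "10.0.0.6"][hash(str(pkt)) % 2]; return pkt

def make_chain(*vnfs):
    """Compose VNFs into a single service function chain."""
    def chain(pkt):
        for vnf in vnfs:
            pkt = vnf(pkt)
            if pkt is None:        # dropped by an earlier function
                return None
        return pkt
    return chain

# Two services built from the same function pool, as on a shared platform:
residential_service = make_chain(firewall, nat)
enterprise_service  = make_chain(firewall, nat, load_balancer)
print(enterprise_service({"src": "192.168.1.2", "dst": "198.51.100.9"}))
```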

Indeed, much work is currently focusing on the development of NFV software platforms that promise to deliver a comprehensive infrastructure to handle the requirements of current and future telecommunications and service operators. The Next Generation Central Office (NGCO) is the generic term for the re-architecture of the telco central office towards a fibre-rich, software-centric CO that benefits from the principles of virtualisation developed in the data centre. Today we see many different projects from different organisations developing this idea into well-defined architectures and software systems, such as the Open Networking Foundation (ONF) Central Office Re-architected as a Data Centre (CORD) [11], the Open Platform for NFV (OPNFV)'s Virtual Central Office (VCO) [12] and the Broadband Forum (BBF)'s cloud-CO [13]. CORD was one of the first projects to pioneer the use of NFV in access and metro networks, bringing it inside the central office. This idea has quickly attracted the interest of the networking industry, gaining in only a few years the support of several operators across the world, many of which have started carrying out network trials. This has also led to novel standardisation activities: for example, the BBF has recently released the cloud-CO Technical Report (TR) [13].

Both VCO and CORD are open source projects with real code bases, while the cloud-CO defines standards, through TRs, for interoperability (for instance through YANG schemas) and for how the CO should function (for instance, sizing and scalability). In practice, both VCO and CORD use OpenStack as a virtualisation platform; however, from the perspective of controllers, VCO uses OpenDaylight while CORD uses ONOS.

The concept of central office virtualisation brings together different network technologies, providing functional convergence for mobile and optical access networks and paving the way for its extension towards disaggregated optical networks (e.g., the use of ROADM white boxes in the access and metro network). However, many challenges remain to be addressed in order to guarantee the quality of service required to run upcoming 5G applications, while multiplexing network and processing resources across multiple operators and services.

As will be clarified in the next section, where we delve into the architectural details of central office virtualisation, while SDN and NFV can in principle operate independently, they are highly synergistic in a cloud-CO environment. Although their definitions are often somewhat arbitrary, SDN's task is typically that of providing a software interface to physical devices, thus enabling the centralisation of the control plane across multiple devices. NFV, on the other hand, provides virtualisation of the physical hardware, to represent its functionality in software. NFV brings advantages associated with increased flexibility in the data plane, allowing cost reduction through the use of commodity servers and enabling resource slicing, thus assigning different network instances to different tenants and services. SDN, in parallel, brings advantages of control plane flexibility, enabling coordination of slices both in single and multi-domain environments and facilitating multi-tenancy by offering network control to multiple entities through the use of programmatic interfaces. The meaning of slicing can be summarised by the 3GPP definition [14] of "transforming the network/system from a static one size fits all paradigm, to a new paradigm where logical networks/partitions are created, with appropriate isolation, resources and optimised topology, to serve a particular purpose or service category or even individual customers". While the 3GPP definition refers inherently to a mobile system (inclusive of Radio Access Network, Core Network Control Plane and User Plane Network Functions), in this paper the concept of slicing extends to additional technologies (e.g., optical backhaul/metro, layer 2 and layer 3 networks) that are part of the end-to-end service path.

In the remainder of the paper we will use the term cloud-CO interchangeably with VCO or NGCO, to refer to the general concept of virtualisation of the central office. In the next section, this tutorial paper will introduce a number of different development frameworks that implement the cloud-CO concept. We then extend the disaggregation to the optical layer, briefly mentioning some of its pros and cons and their importance towards the realisation of a fully virtualised network. After delving into some technical details on two use cases for network virtualisation and slicing, we give, in Section V, a general overview of the economic benefit that the digitisation of the telecommunications industry could bring. Finally, we conclude the paper by exploring some of the outstanding challenges we believe should be addressed in the near future.

II Cloud Central Office Architectures

A modern CO is a network node that terminates residential and business subscriber lines. It typically contains equipment such as Digital Subscriber Line Access Multiplexers, used to terminate copper broadband lines; Optical Line Terminals (OLTs), used to provide PON or point-to-point optical access; data aggregation equipment, typically operated through Ethernet switching; and data transport equipment such as Multi Protocol Label Switching (MPLS) routers, used to provide transport data services within the operator's network. It can also contain Broadband Network Gateways, which provide subscribers with connectivity to the Internet.

A cloud-CO is a framework for bringing NFV into a telecommunications central office, where functions that typically run on dedicated hardware are moved to software frameworks running on commodity hardware.

Fig. 1: Classification of NFV-related development frameworks [15]

This moves the CO architecture towards that of a data centre. Its implementation relies on the development of several software components that closely inter-operate to deliver an end-to-end solution. The diagram in Fig. 1 (re-drawn from source [15]) is an attempt to map the functionality of some of these components with respect to the infrastructure, the management and control plane stack, and services. The figure is organised as a layered structure, with layers representing different levels of abstraction for a generic NFV system (i.e., rather than the typical OSI layered network structure). As we move up from the Disaggregated Hardware towards the Application layer, each layer provides further levels of component abstraction. Although many of the projects named in the figure are hosted by the Linux Foundation, they originated independently, driven by different industry organisations and business divisions and following different standardisation efforts; thus there can be substantial overlap in functionality across them. We can also observe that while most of the projects operate on single layers, three of them, CORD, the Open Network Automation Platform (ONAP) and OPNFV, operate across multiple ones. While an in-depth overview of each project is outside the scope of this paper, we provide a brief description of some of the most popular ones across the layers.

  • The Telecom Infra Project (TIP) [16]: operates at the disaggregated hardware layer, aiming at opening up the transmission system and separating its hardware and software components. While this idea closely resembles that of OpenFlow, TIP goes far beyond the layer 2/layer 3 switch, aggregating multiple sub-projects each addressing a different technology, covering wireless and wired transmission, from access to backhaul and core networks. For example, the Open Optical & Packet Transport sub-project provides an Open Line System (OLS) specification for validating interoperability of optical transponders across multiple vendors.

  • The Data Plane Development Kit (DPDK) [17]: is a set of user space libraries that can be used to speed up packet processing operations on general purpose processors. This is achieved through a number of optimisations, such as bypassing the Linux kernel and using techniques like Direct Memory Access (DMA) and device polling to avoid processing interrupts. Recent DPDK performance reports [18] show its ability to process over 70 million packets per second on a server with high-end Intel Xeon processors and Intel Ethernet network adapters.

  • The ONOS controller [19]: is a network controller driven and supported by the ONF and developed specifically for network and service providers. Some of its distinctive characteristics are a design for high availability and resiliency, high scalability to support several million requests at the northbound interface, and low-latency response to network events. Its southbound interfaces are extensive and include recent protocols such as OpenFlow, the Network Configuration Protocol (NETCONF), P4 and RESTCONF, while supporting well established ones, such as the Border Gateway Protocol (BGP), the Simple Network Management Protocol (SNMP), Transaction Language 1 (TL1) and the Command Line Interface (CLI). Perhaps one of the most interesting features recently implemented is the intent-oriented framework, which allows users to specify their control plane requests in terms of policy rather than specific actions (a minimal example of submitting such an intent through the ONOS REST API is sketched after this list). This provides ONOS with flexibility in selecting the most adequate actions to meet the requirements of a given request (e.g., a connection of X Gb/s capacity and Y ms maximum latency between two end points) and to respond appropriately should a network failure or congestion occur. Finally, ONOS was the network controller of choice for the CORD project, described further below.

  • OpenStack [20]: is a cloud computing platform for the virtualisation of computing resources, typically used in data centres or smaller clusters. The system is made up of several components, each managing a different aspect or service of the cloud. For example, the "Neutron" component manages networking, "Nova" manages the computing resources, "Cinder" the storage system, "Keystone" provides user authentication services, etc. OpenStack enables integration of hardware components from multiple vendors and is arguably the most common open source cloud computing system to date.

  • Open Source MANO (OSM) [21]: is an open source implementation of the ETSI NFV "Management & Orchestration" reference architecture [22]. Its architecture is divided into three main functions, briefly described below, showing how services are mapped onto sets of virtual functions and subsequently onto physical hardware resources.

    The Virtual Network Function Manager (VNFM) manages VNF instances across their lifetime, performing tasks such as instantiating VNFs, scaling them with respect to resource usage and terminating them when no longer needed.

    The Network Function Virtualisation Orchestrator (NFVO) sits on top of the VNFM and performs the task of selecting the correct VNFs and chaining them together to provide the service requested by a user or application. It also provides service monitoring, to ensure compliance with the given requirements. In OSM, the NFVO and VNFM are embedded in a module called the Network Service Orchestrator.

    The Virtualised Infrastructure Manager (VIM) is a virtualisation framework whose main task is to keep an inventory of which physical resources (compute, storage and network) are assigned to the virtual network functions instantiated by the VNFM. OSM offers the possibility of using different VIM implementations. The VIM module developed by OSM is OpenVIM, which follows ETSI recommendations [23] and represents a minimalist and lightweight implementation of VIM functionalities. OpenStack can also be used (i.e., instead of OpenVIM), offering additional flexibility and expandability, which however comes at the cost of a more heavyweight implementation (OpenStack was indeed designed to handle large-scale compute, storage and networking resources). Plugins for other VIM implementations are also supported, such as Amazon Web Services (AWS) and VMware vCloud Director (vCD). In terms of controller support, OSM can operate with OpenDaylight, ONOS or Floodlight.
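
As a concrete illustration of the intent-oriented control mentioned in the ONOS entry above, the minimal sketch below submits a host-to-host intent through the ONOS REST API. The controller address and host IDs are placeholders, and the credentials shown are the defaults of a stock ONOS install; a real deployment would differ.

```python
# Sketch: submitting a host-to-host intent to an ONOS controller via its
# REST API. ONOS compiles the intent into flow rules and re-routes on
# failure. Address, host IDs and credentials below are placeholders.

import requests

ONOS = "http://192.0.2.10:8181/onos/v1"
AUTH = ("onos", "rocks")  # stock ONOS defaults; change in production

intent = {
    "type": "HostToHostIntent",
    "appId": "org.onosproject.cli",
    "priority": 100,
    "one": "00:00:00:00:00:01/None",   # source host ID (MAC/VLAN format)
    "two": "00:00:00:00:00:02/None",   # destination host ID
}

resp = requests.post(f"{ONOS}/intents", json=intent, auth=AUTH)
resp.raise_for_status()
print("Intent submitted:", resp.status_code)
```

Note that the request expresses only the policy ("connect these two hosts"); the choice of paths, and their recomputation after a failure, is left to the controller, as described above.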

As previously mentioned, in Fig. 1 we also find a number of projects spanning more than one layer, which typically make use of some of the components described above to provide a usable framework capable of implementing the cloud-CO concept. We describe below three main projects that have recently gained visibility across the telecommunications industry.

  • The Central Office Re-architected as a Data Centre (CORD) [11]: is arguably the first project that attempted to implement a virtualised central office in software. CORD used an implementation-oriented approach, in that it minimised the time spent defining the architectural design and instead provided an initial proof of concept quickly after its launch, creating ferment among operators. Its success is evidenced by the fact that, within a few months of the launch of its consortium, several major industry players joined the effort as partners or collaborators. As shown in Fig. 1, CORD covers both the virtual network management and the control plane (making use of ONOS for the latter). Its platform development is differentiated into three main application areas, providing use-case-specific implementations. Residential CORD (R-CORD) aims at providing broadband services and leads the development of the Virtual OLT Hardware Abstraction (VOLTHA), which virtualises the passive optical network. Mobile CORD (M-CORD) provides solutions for the virtualisation of mobile networks, providing cloud Radio Access Network (C-RAN) implementations with a number of different functional splits. Enterprise CORD (E-CORD) targets enterprise services, providing connectivity on demand to business customers (e.g., implementing Virtual Private Networks, Software Defined Wide Area Networks (SD-WANs), etc.). While CORD has to date differentiated across these three main areas, they are all based on the same overall framework and can reuse the same hardware infrastructure (e.g., commodity servers, storage and switches), with some differentiation in the I/O devices (i.e., physical layer OLTs for R-CORD, Remote Radio Units for M-CORD, or ROADMs and transponders for E-CORD).

  • The Open Platform for NFV (OPNFV): is a carrier-grade open source reference platform for NFV. Similarly to CORD, OPNFV reuses open source components from the protocol stack in Fig. 1. However, while CORD is focused on pre-defined use cases and operates a specific selection of elements to provide production-ready solutions, OPNFV takes an open approach and relies on the platform's end users to select and integrate the components to be used. In addition, OPNFV is based on the OpenDaylight network controller and provides extensive support for BGP. With reference to Fig. 1, we can see that OPNFV covers all the layers of the virtualisation stack, confirming its main target of bringing together all NFV-related activities into a coherent reference platform. In this perspective, CORD's focus on specific use cases has led its consortium to consider participation in future releases of OPNFV (i.e., to produce CORD-based scenarios of OPNFV).

  • The Open Network Automation Platform (ONAP) [24]: is a project developing an NFV platform, recently formed by the merger of the Open Orchestrator (OPEN-O), led by China Mobile, and Enhanced Control, Orchestration, Management & Policy (ECOMP), led by AT&T. ONAP focuses on the management aspects, covering layers from network data analytics down to network control. It provides the ability to specify both orchestration and control frameworks (to automatically instantiate services) and an analytic framework for monitoring the performance of the services created.

To put the three platforms above into perspective, we should note that while they serve in principle the same high-level objective of network orchestration through NFV and SDN, OPNFV may be seen as an architecture-driven approach backed by extensive test suites, while CORD may be seen as a use-case-driven approach. ONAP uses a top-down approach (building on the extensive ECOMP functionality) with specific emphasis on enterprise requirements such as end-to-end and automated service activation. We should also note that the three platforms have all been developed independently and were only recently brought under the Linux Foundation umbrella. While their development carries on autonomously, their consortia have set up liaison efforts to provide future interoperability across the platforms. For example, as system integrator, OPNFV could provide interoperability and end-to-end performance validation for the entire platform, while ONAP could be adopted for the management and control aspects of NFV. Liaisons are also ongoing between CORD and OPNFV, which could see CORD-based scenarios, focused on the use of merchant silicon and white-box switches, being included in future OPNFV releases.

Among the benefits that the combined use of NFV and SDN has brought to the telecommunications world, one that stands out is the ability to provide convergence across multiple network domains in an unprecedented fashion. A representative use case is the integration of fixed and mobile networks, where for example the scheduler of a PON and that of a mobile Base Band Unit (BBU) can be synchronised to reduce the transmission latency over the PON in a cloud RAN implementation [25, 26]. Another example [27] is the dynamic adaptation of the PON assured rate and mobile fronthaul rate, according to the actual mobile cell load. Several other examples exist, spanning from the convergence of mobile, optical and cloud compute resources [28], to multi-tenancy application of the cloud-CO [29], [30].
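
To illustrate the kind of fixed-mobile cooperation described above, the following toy sketch shows a PON DBA that builds its bandwidth map directly from grants the mobile scheduler has already issued, avoiding the extra report/grant round trip of a conventional status-reporting DBA. All names, frame sizes and grant values are invented; real schemes such as [25, 26] differ in detail.

```python
# Toy sketch of cooperative mobile/PON scheduling: the PON DBA
# pre-allocates upstream slots from the BBU scheduler's uplink grants
# instead of waiting for ONU buffer reports. All values are invented.

def mobile_grants():
    """Uplink grants issued by the BBU scheduler for the next PON frame,
    as (onu_id, bytes) pairs (hypothetical values)."""
    return [(1, 9000), (2, 3000)]

def cooperative_dba(grants, frame_capacity=19440):
    """Pre-allocate upstream slots for the fronthaul bursts, saving the
    report/grant cycle and hence reducing PON latency."""
    bmap, start = [], 0
    for onu, size in grants:
        size = min(size, frame_capacity - start)
        if size <= 0:
            break
        bmap.append({"onu": onu, "start": start, "size": size})
        start += size
    return bmap

print(cooperative_dba(mobile_grants()))
# -> [{'onu': 1, 'start': 0, 'size': 9000},
#     {'onu': 2, 'start': 9000, 'size': 3000}]
```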

Recently, this network convergence and virtualisation trend has progressed further down the stack, to the optical transmission layer, with the aim of opening up the transmission systems, including transponders, ROADMs, amplifiers, etc. This concept, called optical network disaggregation, is addressed in the next section, which discusses how the cloud-CO revolution is affecting the optical transmission layer.

III Optical Layer Control and Disaggregation

Many future use cases for the development of novel connected applications require the ability to provide instant capacity across the network. These vary from the provisioning of bare point-to-point capacity on demand to a business, to the establishment of an end-to-end VNF chain from an end user to an edge or centralised computing centre, or to the migration of network functions and virtual machines across the access and metro area to fulfil the capacity, latency and availability requirements of specific applications. In addition, the ability to provide dynamic capacity reconfiguration is essential to enable statistical multiplexing of networking and computing resources, which are typically constrained by economics. Resource virtualisation platforms can quickly allocate capacity on demand, as long as the underlying physical layer has enough available resources. In data centres, such physical layer capacity is typically pre-provisioned, since in this environment ultra-dense and high-capacity connectivity is affordable. However, this is not the case for telecommunications networks, where fast dynamic reallocation of physical capacity will be a requirement to provide cost-effective, high-capacity and low-latency connectivity, across multiple domains, to a cluster of new applications that will bring new revenue into the network [31, 32].

The multi-domain challenge was partly addressed by work proposing SDN orchestration of capacity across domains. The concept has been demonstrated with respect to both inter-domain control plane [33] and data plane across different network technologies (i.e., involving optical packet switching and flexgrid) [34]. However, the optical layer was still considered a closed system, a black box with at best a proprietary network controller exposing northbound interfaces to the network orchestrator.

More recently, work in [35] has demonstrated the possibility of using southbound interfaces (e.g., OpenFlow) to directly control ROADM nodes and gather information from Optical Signal to Noise Ratio (OSNR) monitors to trigger re-routing at the control plane after a physical link failure. This gives the controller the ability to carry out data analytics and the flexibility to directly control the physical transmission network. It can be seen as a step forward towards a completely open and disaggregated solution, where all optical sub-components are controlled through an SDN control plane [36].
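
A skeleton of such a monitoring loop is sketched below; the threshold value, device query and controller call are placeholders, not the interfaces used in [35].

```python
# Skeleton of the monitor-and-restore loop described above: poll OSNR
# monitors and ask the SDN controller to re-route a lightpath when a
# link degrades. Threshold, query and re-route calls are placeholders.

import time

OSNR_THRESHOLD_DB = 18.0   # illustrative minimum acceptable OSNR

def read_osnr(link):
    """Placeholder: query the OSNR monitor attached to this link (dB)."""
    ...

def reroute(link):
    """Placeholder: request control-plane restoration for this link."""
    ...

def monitor(links, period_s=1.0):
    while True:
        for link in links:
            osnr = read_osnr(link)
            if osnr is not None and osnr < OSNR_THRESHOLD_DB:
                reroute(link)   # trigger re-routing at the control plane
        time.sleep(period_s)
```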

The advantages of optical layer disaggregation are manifold and fit well with the requirements of future network applications and services. From a cost perspective, optical disaggregation makes it possible to source components separately, which in turn avoids vendor lock-in and increases competition, which can drive down prices and improve component performance. It also carries advantages from a network convergence perspective, as it enables the integration of systems developed by different vendors and for different network domains (i.e., access, metro and long haul). From a network performance perspective, greater control over the optical transmission network can be used to reduce end-to-end latency, for example by enabling the orchestration of transparent wavelength connections between access, metro and data centre domains. The network orchestrator could, for example, provide ultra-low latency paths from a location in the access network to a data centre by controlling the optical systems in both domains, dynamically provisioning a transparent path that is terminated directly in the server that carries out the data processing for the service [37] (this scenario is further discussed in the next section). A typical example is the dynamic allocation of C-RAN streams across edge and centralised computing [38], following the latest functional decomposition architectures from the Next Generation Mobile Networks forum (NGMN) [39].

Fig. 2 illustrates the concept of full optical network disaggregation, where different network elements and control systems can be supplied by different vendors. While this might be considered by some the ultimate goal of an "Open Roadmap" [40], current developments are taking intermediate steps towards this goal. The OLS, for example, assumes that the line system (inclusive of the line system control, with reference to Fig. 2) is provided by one vendor, but transponders can be sourced from different suppliers. The aim of OLS is thus to guarantee interoperability between any transponders and line systems. A further step should see the opening of APIs to provide control to external SDN systems. The OpenROADM [41] specification, driven by a consortium of ROADM vendors, envisages the use of NETCONF with YANG data models to provide the controller with an abstract representation of the devices. In addition, the OpenROADM specification aims at disaggregating the OLS, providing abstractions for the functions carried out by ROADMs, transponders, pluggable optics, in-line amplifiers and muxponders.
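
As an illustration of this NETCONF/YANG approach, the sketch below retrieves the running configuration of a ROADM using the Python ncclient library. The device address and credentials are placeholders, and the filter namespace is only indicative: the authoritative structure is defined by the OpenROADM YANG models themselves.

```python
# Sketch: reading device state from a NETCONF-enabled ROADM with
# ncclient, in the spirit of the OpenROADM approach (NETCONF + YANG).
# Address, credentials and the filter namespace are placeholders.

from ncclient import manager

with manager.connect(
    host="192.0.2.20", port=830,          # hypothetical ROADM address
    username="admin", password="admin",   # placeholder credentials
    hostkey_verify=False,
) as m:
    # Subtree filter limiting the reply to the device model; the exact
    # namespace and structure come from the OpenROADM YANG models.
    device_filter = """
    <filter>
      <org-openroadm-device xmlns="http://org/openroadm/device"/>
    </filter>"""
    config = m.get_config(source="running", filter=device_filter)
    print(config)
```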

The ONF-based Open and Disaggregated Transport Network (ODTN) [42], mostly driven by operators, also aims at providing open APIs and OLS disaggregation (although its first phase only addressed point-to-point transmission systems), but puts a stronger emphasis on the use of widely adopted interfaces, such as the Transport API (TAPI) as the northbound interface and OpenConfig-based models as southbound interfaces. In addition, ODTN uses ONOS as its reference controller. (Additional information on network models in support of optical disaggregation can be found in [43].)

Fig. 2: Illustration of the principle of full optical disaggregation

While there are benefits to the disaggregation of the optical layer, we notice that its implementation has lagged behind that of the rest of the network. The main reason is arguably that there is a fundamental difference between virtualising layers operating in the digital domain (e.g., above L2) and the optical transmission layer, which operates in the analogue domain and needs to address optical transmission impairments. Thus, before the optical layer can be fully opened and its control system integrated with the rest of the layers, there are challenges to be solved. For example, in a closed optical communications system the same vendor provides transponders, in-line amplifiers and ROADM nodes, so the variability and unpredictability of the system is minimised. This is especially important for longer links (e.g., in the long haul), where the available optical margins are squeezed to a minimum. Thus, when we attempt to open up optical systems, making use of components from different vendors, the variability and uncertainty of the performance of an end-to-end path typically increase, reducing optical margins and making the network less efficient. As of today there are ongoing discussions on the feasibility and benefits of disaggregating optical networks, and most recognise the existence of a trade-off between the need to increase the amount of transmission monitoring components to reduce system uncertainty, and the cost they add to the network. Studies in [44] provide some evidence that the benefits of optical layer disaggregation are more likely to occur in the metro area, where the optical margins are less strict than in regional and core networks. In addition, much work has recently been carried out on the use of machine learning techniques to improve the prediction of Quality of Transmission (QoT), so that the expected performance of a new path can be assessed before it is provisioned ([45, 46, 47, 48], to cite only a few; the reader can also refer to [49], [50] for comprehensive tutorials on this topic). Overall, while this is a promising area of research, more work is required on assessing how to collect, store and share adequate data sets to train the machine learning algorithms.
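
The following sketch conveys the flavour of such data-driven QoT estimators with an entirely synthetic example: a regressor is trained to map candidate-path features to received OSNR, and then used to vet a path before provisioning. The features, data and decision threshold are invented for illustration; published work, such as the studies cited above, uses measured field or testbed data.

```python
# Sketch of ML-based QoT estimation: learn a mapping from path features
# to OSNR, then check a prospective lightpath before provisioning it.
# Training data, feature set and threshold are synthetic/illustrative.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic features: [path length (km), number of spans, channel load]
X = rng.uniform([50, 1, 0.1], [1500, 20, 1.0], size=(500, 3))
# Toy ground truth: OSNR degrades with length, spans and load, plus noise
y = 35 - 0.01 * X[:, 0] - 0.4 * X[:, 1] - 2 * X[:, 2] \
    + rng.normal(0, 0.5, 500)

model = RandomForestRegressor(n_estimators=100).fit(X, y)

candidate = np.array([[800, 10, 0.6]])     # a prospective lightpath
predicted_osnr = model.predict(candidate)[0]
print(f"Predicted OSNR: {predicted_osnr:.1f} dB",
      "-> provision" if predicted_osnr > 18 else "-> reject")
```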

In conclusion, work on the disaggregation of optical networks is at an early stage but ongoing, with several consortia involved in the definition of interfaces or interoperability specification.

IV Sample use cases

If we look at the progress made in the cloud networking area (e.g., network virtualisation for the data centre domain), we see mature products (e.g., the VMware network virtualisation and security platform, NSX [9]) being deployed, allowing a complete abstraction of the network, thus enabling full portability of sub-networks across racks and data centres, improved resilience, enhanced security, etc. Moving down the stack, we see research oriented towards the virtualisation of physical transmission devices for telecommunications networks, aiming to provide high granularity, isolation and quality differentiation of slices sharing the same physical channel, with examples in both the wireless [51] and optical [52] domains.

Virtualisation and slicing will play a fundamental role in future generations of networks, allowing network customisation down to the flow level to provide personalised network services to specific classes of applications and to enable true multi-tenancy. However, their full performance can only be delivered if the underlying physical transmission channel is capable of quickly adapting paths and capacity to the requirements of the layers above. This is especially true in the access and metro parts of the network, where the large number of end-points does not allow all necessary capacity between them to be pre-provisioned. Incidentally, these are also the parts where optical layer disaggregation seems more practical [44]. While work is ongoing on the definition of frameworks, architectures and interfaces [41, 16, 42], more research is needed into the dynamic provisioning of QoS-oriented (e.g., with bounded latency and availability levels) physical layer links across different technological domains. These include the mobile access, fixed optical access, optical metro transport and computing domains (both at the edge and in large centralised data centres).

In this section we provide technical details of two specific use cases of resource slicing in next generation cloud-COs.

IV-A PON scheduling virtualisation for multi-service and multi-tenant applications: the virtual DBA (vDBA)

The first use case focuses on the virtualisation of a specific component of a cloud-CO architecture: the DBA running in the OLT network element. As mentioned above, R-CORD was the first project to propose and implement a virtualised OLT as part of its central office virtualisation framework. However, in its implementation the DBA for upstream capacity scheduling is implemented in hardware and runs as a single instance. Our research group at Trinity College Dublin (TCD) introduced the concept of DBA virtualisation in [53], and proposed and developed a testbed implementation of the architecture in [54]. vDBA provides a mechanism to move scheduling algorithms from a single instance running in hardware to multiple instances running in software. This allows different Virtual Network Operators (VNOs) sharing the same physical PON infrastructure to fully control, down to the intra-frame microsecond time scale, the capacity scheduling of their access network slice. For example, addressing a possible 5G use case, a VNO could run two different scheduling algorithms, one optimised for customers requiring low-latency operations and another optimised for efficient use of resources. The vDBA concept was ratified in the BBF standard TR-402, "Functional Model for PON Abstraction Interface" [55].

Our testbed was implemented by linking a server running the vDBA mechanism in software to an OLT running on Field Programmable Gate Array (FPGA) hardware, implementing XGS-PON framing and 10 Gb/s optical transmission. The vDBA allows more than one DBA algorithm to run in parallel, each producing its own virtual Bandwidth Map (BMap). These are forwarded to a second function, called the Merging Engine (ME), which merges them into a single physical BMap that is forwarded to all Optical Network Units (ONUs) over the physical layer.
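
A toy model of this pipeline is sketched below: each virtual DBA schedules its own slice of the upstream frame, and the ME serialises the virtual maps into one physical map with absolute start times. The frame budget, ONU IDs and the per-VNO scheduling policy are illustrative; this is not the testbed code.

```python
# Toy sketch of the vDBA / Merging Engine pipeline described above.
# Frame budget, ONU IDs and scheduling policies are illustrative.

FRAME_WORDS = 9720            # illustrative upstream frame budget

def vdba(onus, share):
    """One VNO's scheduler: here it simply splits the slice's share
    equally; a real vDBA runs an arbitrary, VNO-specific algorithm."""
    per_onu = share // max(len(onus), 1)
    return [{"onu": o, "size": per_onu} for o in onus]

def merging_engine(virtual_bmaps):
    """Serialise the virtual BMaps into one physical bandwidth map with
    absolute start times, which is then broadcast to all ONUs."""
    physical, start = [], 0
    for vmap in virtual_bmaps:
        for grant in vmap:
            physical.append({**grant, "start": start})
            start += grant["size"]
    assert start <= FRAME_WORDS, "slices exceed the physical frame"
    return physical

vno1 = vdba(onus=[1, 2], share=FRAME_WORDS // 2)      # low-latency tenant
vno2 = vdba(onus=[3, 4, 5], share=FRAME_WORDS // 2)   # best-effort tenant
print(merging_engine([vno1, vno2]))
```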

Fig. 3: Experimental vDBA demonstrator developed at TCD

Our implementation, shown in Fig. 3, uses OpenStack as a virtualisation framework on top of which the vDBA and ME functions run. OpenStack provides extensive options in the choice of hypervisor and virtualisation technologies. Because the vDBA and ME functions are compute intensive and do not require much disk storage, we use lightweight virtualisation based on Linux Containers.

One of the main issues with off-loading the DBA to an upstream host is that, even for a low-to-medium-sized PON tree, the hardware side of the OLT injects small datagram packets into the host's network packet processing module at a high rate (e.g., tens or hundreds of thousands per second). Network cards and Linux network kernel modules can cater for large traffic streams of several gigabits per second when the traffic is carried in large packets. However, the small-packet characteristic of the Dynamic Bandwidth Reporting Unit (DBRU) (potentially sent every frame by all ONUs to the OLT) can consume a large amount of GPP resources, because each incoming DBRU packet generates a hardware interrupt (which represents one of the main causes of delay in software packet processing systems).
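
As a rough illustrative estimate, assuming each of the 32 ONUs of a fully loaded PON sends one DBRU per 125 µs upstream frame:

```latex
R_{\mathrm{DBRU}} \;=\; \frac{N_{\mathrm{ONU}}}{T_{\mathrm{frame}}} \;=\; \frac{32}{125\ \mu\mathrm{s}} \;=\; 256{,}000\ \text{packets/s}
```

This is consistent with the hundreds of thousands of packets per second quoted above; at one hardware interrupt per packet, interrupt handling alone becomes a significant source of processing delay.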

Our implementation thus optimised the vDBA and ME modules for network packet processing [56]. Firstly, we use Single Root Input/Output Virtualisation (SR-IOV) to off-load packet processing interrupts to the network card, thereby minimising interrupts to the main CPU and its cores. We make extensive use of the DPDK software libraries, developed by Intel, to reduce the amount of unnecessary copying of data between memory regions on the same host (even when the memory segments are associated with different virtual machines), as well as to optimise locking when reading from and writing to buffers and queues. We prevent real-time critical functions, such as the Merging Engine, from being continuously interrupted by using DPDK to assign individual or complementary functions to distinct cores.

To demonstrate the functionality and performance of our vDBA architecture, we generated a constant stream of DBRU traffic based on a combination of real traffic at the ONU and a traffic emulator, to reproduce the scenario of a PON with 32 ONUs. Fig. 4 shows the interval, in microseconds, between the transmission of successive BMaps, calculated over 30,000 bandwidth generation cycles. The average inter-transmission time takes into account the packet routing within the host platform between the Merging Engine and vDBA applications, as well as the processing time for the calculation of the constituent Bandwidth Maps by 2 VNOs, and the subsequent merging into a single bandwidth map by the Merging Engine. As can be seen in the figure, we obtain minimal variation in the BMap transmission time (with a calculated variance of 0.75). This shows that vDBA BMap generation is highly stable, and that moving from hardware to software does not deteriorate PON performance.

Fig. 4: Bandwidth Map stability characterisation

IV-B Cross-domain optical operations in dynamic access/metro/cloud environments

The second use case focuses on an end-to-end dynamic slicing scenario, from mobile access to edge or central cloud. For this we consider a future cloud-CO with dynamic optical switching capabilities and a high densification of mobile cells, as shown in Fig. 5. Following the functional decomposition guidelines [39], different functional split options are available, where Distributed Units (DUs) and Centralised Units (CUs) can be dynamically placed depending on the compute resources available and latency constraints. The cells are backhauled through a virtualised PON, which can provide links both to local computation elements (e.g., the Edge Cloud Node) and to the cloud-CO, which is linked to the rest of the metro network and can provide connectivity to larger metro DCs.

Fig. 5: Future cloud-CO with dynamic optical layer scenario; panels (a) and (b) show the network at two different times.

The scenario reported in Fig. 5(a) shows a macro cell with co-located DU and CU, which does not require a low-latency connection (shown in red); it is thus terminated at a physical OLT in the cloud-CO and from there reaches its destination server through the shared electronic switch fabric (e.g., vOLT1 and then APP2 in the figure). Small cell 2 operates with split option 7 [39] (i.e., the split option that divides the physical layer into two, the low PHY and the high PHY), which requires a low-latency connection (shown in yellow) to the DU/CU, which runs in the Edge Cloud Node. It should be noted that the PON splitter allows optical signals to be tapped locally, so that specific wavelength channels can be terminated at a local Edge Cloud Node. From there, the data stream continues towards the cloud-CO with no strict latency constraints (blue line) and is thus mixed with residential traffic and terminated at an OLT (e.g., vOLT2 in the figure). For small cell 1 instead, as there is no spare capacity available at the edge node, a transparent connection through the PON is required (shown in green). Since in this instance the cloud-CO cannot provide low-latency connectivity through its electronic switch fabric, the link is operated as Point-to-Point (P2P) directly to the server where the BBU is located (e.g., the blocks labelled P2P, DU and CU, respectively, in the figure). The detailed architecture of the optical switch fabric is not shown in the figure, but examples are available in works such as [57], addressing access/metro convergence, and [58, 59], looking at hybrid electrical/optical switching in data centres. Also, in this use case no strict latency connection is required towards the metro data centre, which runs higher-layer applications. A simple example of a PON splitter node implementation is also shown in the figure, with power taps that can link the edge node both to end-user access points and to the CO. More complex and flexible implementations using active wavelength steering (i.e., Wavelength Selective Switches and ROADMs) could also be considered in some parts of the Optical Distribution Network (ODN) as their cost decreases over time.

Fig. 5(b) shows the same use case at a different time, when a high-priority application (APP4 in the figure) with low-latency requirements needs to run for a user connected to small cell 1. The control plane thus needs to preempt small cell 2's connection to free resources to run the DU, CU and application locally at the Edge Cloud Node for cell 1. Since at this time the cloud-CO cannot provide computational resources capable of guaranteeing low-latency processing at the DU for cell 2, the cloud-CO operates a full optical bypass [60], which provides a point-to-point transparent link connection (shown in yellow) from small cell 2 directly to a server in the metro DC. (Due to the latency constraints associated with a functional split of the physical layer, it is typically considered that the propagation distance from small cell to data centre should not exceed 40 km. However, the transmission latency constitutes only a small part of the latency budget, and this value could be increased if the optical bypass can reduce the latency of other operations. For example, an optically transparent link removes the latency associated with electronic switching in the DC. In addition, the ability to provide more powerful processing resources in the metro DC would decrease the overall processing latency, leaving a higher latency margin for the optical link transmission.) This terminates on a P2P transceiver directly in the server (or rack) where the BBU functions and applications are processed (labelled P2P, DU, CU and APP3, respectively, in the figure).
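
To put the 40 km figure in context, assuming the usual propagation delay of roughly 5 µs per km of fibre, the one-way propagation latency of such a bypass link is

```latex
t_{\mathrm{prop}} \;=\; L \times \tau_{\mathrm{fibre}} \;\approx\; 40\,\mathrm{km} \times 5\ \mu\mathrm{s/km} \;=\; 200\ \mu\mathrm{s}
```

which is why, as noted above, latency saved in electronic switching and processing elsewhere in the chain can be traded for additional optical reach.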

V The business case for telecoms digitisation

As can be expected, the main driver towards virtualisation of operators' networks is the prospective increase in network cost-effectiveness and revenue streams. While there is still uncertainty today about which aspects of central office virtualisation will provide the largest economic advantage, network strategists have identified some of the main features that will drive cost reduction and revenue increase across the entire telecommunications digitisation process. The World Economic Forum (WEF) Digital Transformation Initiative has estimated that the biggest cost savings, in terms of network infrastructure and energy consumption, will be generated by moving operations from dedicated hardware resources to software running on commodity servers. Their estimate predicts an overall contribution to profit over the next 10 years of $200bn [61]. In addition, they predict that network automation will reduce many of the operational expenses, generating an extra cumulative profit of $75bn. Additional cost savings are expected from enhanced security, which will reduce the costs related to data breaches, generating estimated profits of $80bn. In addition to cost savings, network digitisation will provide new revenue streams. Research & Markets estimates that opening up network APIs will allow third parties to provide improved services, generating annual revenues of more than $200bn by the year 2022 [62]. In addition, in [63] the author estimates that new services to residential and enterprise customers, including future Internet of Things (IoT) applications, will generate global profits of the order of $300bn over the next decade.

Another means by which the digitisation of the telco industry will radically enhance the economy is through the improvement of Information Technology (IT) services across all types of businesses. A study carried out by the Harvard Business School [64] showed that non-telco industries can reap substantial benefits by leveraging new data analytics to improve customer relationships, internal operations, product creation and delivery, and the management of human resources.

In conclusion, the medium-to-long term expectations are indeed of substantial economic benefit across several industry sectors. However, it is to be expected that in the short term the transition between current models and next generation virtualised services will bring about some inefficiencies, due to duplication of network functions and expertise across the old and new platforms.

VI Conclusions

This paper, extending the work in [65], has provided a description of recent trends in the virtualisation of telecommunications networks. After a brief introduction to the historical background that has led, over the past two decades, to the development of the SDN and NFV concepts, the paper described ongoing work around the idea of the cloud central office, providing a classification of the main frameworks and platforms currently under development. It then provided a link to the still largely unexplored domain of optical network disaggregation, which can be considered a natural evolution towards a fully open network stack.

In Section IV we provided two examples of use cases for network virtualisation. The first, based on experimental testbed results, showed the possibility of using virtualisation to disaggregate a PON so that multiple tenants and multiple services (e.g., from residential to C-RAN) can be accommodated over a shared infrastructure. The second showed the principle of using disaggregated optical transmission to cut across optical domain boundaries, allowing highly dynamic network reconfiguration to help meet the high-capacity and low-latency requirements of next generation services. We can also foresee similar scenarios occurring in the future for several users and applications, all with different priorities and requirements but competing for the same shared resources: the network will need to accommodate them as they are powered on and off arbitrarily.

Thus, depending on the time of day and other variables, both network functions and applications might need to be re-routed across the network, while trying to maintain their capacity and latency constraints. Full orchestration across the wireless, optical and computing domains is thus required to maximise the number of services that can run successfully in the network. From an optical layer perspective, this requires an agile layer capable of creating transparent connections across several ROADMs, of crossing DC boundaries to provide minimum latency towards a given processing unit, and of using the minimum optical bandwidth necessary to provide the link (i.e., using the most suitable modulation format and flexgrid bandwidth allocation).

Additional research is also required to provide bounded-latency services for NFV flows within a cloud-CO. The trend of moving into software the services that were previously implemented in hardware does raise the issue of latency bounds within a functional chain. If a cloud-CO based on common data centre architectures is to disaggregate and virtualise hardware components, it will need to deliver bounded network and processing performance for some of the VNFs. In addition, upcoming applications linked to virtual and augmented reality will only exacerbate such low-latency requirements. While some recent work has focused on NFV orchestration across domains and VNF placement optimisation [66], the research community still needs to develop frameworks that scale QoS tools to cope with several million flows, within an automated framework that spans multiple network domains.

Finally, as the cloud-CO becomes more and more integrated with access and metro optical networks, we envisage that dynamic, potentially disaggregated, optical networking will become part of the solution, providing low-latency links for high-capacity 5G and beyond mobile access over a multi-service, multi-tenant, statistically multiplexed network infrastructure.

Acknowledgments

This publication has emanated from research conducted with the financial support of Science Foundation Ireland (SFI) under Grant Numbers 14/IA/2527 (O’SHARE) and 13/RC/2077 (CONNECT).

References

  • [1] D. S. Alexander, W. A. Arbaugh, M. W. Hicks, P. Kakkar, A. D. Keromytis, J. T. Moore, C. A. Gunter, S. M. Nettles, and J. M. Smith. The SwitchWare active network architecture. IEEE Network, 12(3):29-36, 1998.
  • [2] S. da Silva, Y. Yemini, and D. Florissi. The NetScript active network system. IEEE Journal on Selected Areas in Communications, 19(3):538-551, 2001.
  • [3] L. Yang, R. Dantu, T. Anderson, and R. Gopal. Forwarding and Control Element Separation (ForCES) Framework. Internet Engineering Task Force, Apr. 2004. RFC 3746
  • [4] N. Feamster, H. Balakrishnan, J. Rexford, A. Shaikh, and K. van der Merwe. The case for separating routing from routers. In ACM SIGCOMM Workshop on Future Directions in Network Architecture, Portland, OR, Sept. 2004
  • [5] A. Farrel, J. Vasseur, and J. Ash. A Path Computation Element (PCE)-Based Architecture. Internet Engineering Task Force, Aug. 2006. RFC 4655
  • [6] G.J. Popek, and R.P. Goldberg. "Formal requirements for virtualizable third generation architectures." Communications of the ACM 17.7 (1974): 412-421.
  • [7] N. Feamster, J. Rexford and E. Zegura. The Road to SDN. ACM Queue, 11(12), Dec. 2013.
  • [8] N. McKeown, et al. OpenFlow: enabling innovation in campus networks. ACM SIGCOMM Computer Communication Review, 38(2), Apr. 2008.
  • [9] VMware NSX white paper. VMware NSX Data Centre: Accelerating the Business. Available at: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/nsx/vmware-nsx-solution-brief.pdf
  • [10] ETSI white paper GS NFV 001, Network Function Virtualisation (NFV); Use Cases. October 2013.
  • [11] L. Peterson, A. Al-Shabibi, T. Anshutz, S. Baker, A. Bavier, S. Das, J. Hart, G. Palukar and W. Snow. Central office re-architected as a data center. IEEE Communications Magazine, 54(10), Oct. 2016.
  • [12] A. Kamadia and N. Chase, Understanding OPNFV: Accelerate NFV Transformation using OPNFV, Mirantis editor, 2017.
  • [13] Broadband Forum technical report TR-384, Cloud Central Office Reference Architectural Framework, Jan. 2018. Available at: https://www.broadband-forum.org/technical/download/TR-384.pdf.
  • [14] 3GPP Technical Report 28.801, Study on management and orchestration of network slicing for next generation network. Version 15.1.0, Jan. 2018.
  • [15] www.linuxfoundation.org
  • [16] https://telecominfraproject.com/open-optical-packet-transport/
  • [17] DPDK documentation, http://core.dpdk.org/doc/.
  • [18] DPDK Intel NIC Performance Report Release 18.02, May 2018. Available at https://fast.dpdk.org/doc/perf/DPDK_18_02_Intel_NIC_performance_report.pdf.
  • [19] P. Berde, M. Gerola, J. Hart, Y. Higuchi, M. Kobayashi, T. Koide, B. Lantz, B. O’Connor, P. Radoslavov, W. Snow, and G. Parulkar, ONOS: towards an open, distributed SDN OS. Proc. of the third ACM workshop on Hot topics in software defined networking, 2014.
  • [20] https://www.openstack.org/
  • [21] https://osm.etsi.org/
  • [22] ETSI, Network Functions Virtualisation (NFV); Management and Orchestration. GS NFV-MAN V1.1.1, Dec. 2014.
  • [23] ETSI, Network Functions Virtualisation (NFV) Performance & Portability Best Practises. GS NFV-PER V1.1.1, June 2014.
  • [24] https://onap.readthedocs.io/en/latest/
  • [25] H. Uzawa, H. Nomura, T. Shimada, D. Hisano, K. Miyamoto, Y. Nakayama, K. Takahashi, J. Terada and A. Otaka. Practical Mobile-DBA Scheme Considering Data Arrival Period for 5G Mobile Fronthaul with TDM-PON. Proc. of European Conference on Optical Communications (ECOC) ’17.
  • [26] S. Zhou, X. Liu, F. Effenberger and J. Chao. Mobile-PON: A high-efficiency low-latency mobile fronthaul based on functional split and TDM-PON with a unified scheduler. Th3A.3, OFC 2017.
  • [27] P. Alvarez, F. Slyne, C. Bluemm, J. M. Marquez-Barja, L. A. DaSilva, M. Ruffini, Experimental Demonstration of SDN-controlled Variable-rate Fronthaul for Converged LTE-over-PON. Proc. of Optical Fibre Communications conference (OFC), paper Th2A.49, March 2018
  • [28] A. Tzanakaki, M. P. Anastasopoulos and D. Simeonidou. Optical Networking Interconnecting Disaggregated Compute Resources: An enabler of the 5G Vision. Proc. of Optical Network Design and Modeling (ONDM) 2017.
  • [29] B. Cornaglia, G. Young, A. Marchetta. Fixed Access Network Sharing. Elsevier Optical Fibre Technology special issue on Next Generation Access, Vol. 26, part A, December 2015.
  • [30] X. Li, R. Casellas, G. Landi, A. de la Oliva, X. Costa-Perez, A. Garcia-Saavedra, T. Deiss, L. Cominardi, R. Vilalta. 5G-Crosshaul Network Slicing: Enabling Multi-Tenancy in Mobile Transport Networks. IEEE Communications Magazine, special issue on Network Slicing in 5G systems, Vol. 55, No. 8, August 2017.
  • [31] M. A. Lema, A. Laya, T. Mahmoodi, M. Cuevas, J. Sachs, J. Markendahl and M. Dohler. Business Case and Technology Analysis for 5G Low Latency Applications. IEEE Access, Vol. 5, pp. 5917-5935, Apr. 2017.
  • [32] Ovum white paper. Monetizing High-Performance, Low-Latency Networks. June 2017. Available at: https://ovum.informa.com/~/media/Informa-Shop-Window/TMT/Files/Whitepapers/White-Paper-Monetizing-HighPerformance-Low-Latency-Networks.pdf
  • [33] V. Lopez, J. M. Gran Josa, V. Uceda, F. Slyne, M. Ruffini, R. Vilalta, A. Mayoral, R. Munoz, R. Casellas, R. Martinez. End-to-end Service Orchestration From Access to Backbone. IEEE/OSA Journal of Optical Communications and Networking, Vol. 9, No. 6, June 2017.
  • [34] Y. Yoshida, A. Maruta, K. Kitayama, M. Nishihara, T. Tanaka, T. Takahara, J. C. Rasmussen, N. Yoshikane, T. Tsuritani, I. Morita, S. Yan, Y. Shu, M. Channegowda, Y. Yan, B.R. Rofoee, E. Hugues-Salas, G. Saridis, G. Zervas, R. Nejabati, D. Simeonidou, R. Vilalta, R. Munoz, R. Casellas, R. Martinez, M. Svaluto, J. M. Fabrega, A. Aguado, V. Lopez, J. Marhuenda, O. Gonzalez de Dios, and J. P. Fernandez-Palacios. First international SDN-based Network Orchestration of Variable-capacity OPS over Programmable Flexi-grid EON. Proc. of Optical Fibre Communications conference (OFC), paper Th5A.2, 2014.
  • [35] Y. Li, W, Mo, S. Zhu, Y. Shen, J. Yu, P. Samadi, K. Bergman and D. C. Kilper. Transparent software-defined exchange (tSDX) with real-time OSNR-based impairment-aware wavelength path provisioning across multi-domain optical networks. Proc. of Optical Fibre Communications conference (OFC), paper Th5A.2, 2017.
  • [36] D. C. Kilper and Y. Li. Optical physical layer SDN: Enabling physical layer programmability through open control systems. Proc. of Optical Fibre Communications conference (OFC), paper W1H.3, 2017.
  • [37] M. Ruffini. Multi-Dimensional Convergence in Future 5G Networks. IEEE/OSA Journal of Lightwave Technology, Vol. 35, No. 3, March 2017.
  • [38] A. Tzanakaki, M. Anastasopoulos, I. Berberana, D. Syrivelis, P. Flegkas, T. Korakis, D. Camps Mur, I. Demirkol, J. Gutierrez, E. Grass, Q. Wei, E. Pateromichelakis, N. Vucic, A. Fehske, M. Grieger, M. Eiselt, J. Bartelt, G. Fettweis, G. Lyberopoulos, E. Theodoropoulou and D. Simeonidou. Wireless-Optical Network Convergence: Enabling the 5G Architecture to Support Operational and End-User Services. IEEE Communications Magazine 55(10): 184-192 (2017).
  • [39] Next Generation Mobile Network (NGMN) alliance. NGMN Overview on 5G RAN Functional Decomposition. Feb., 2018. Available at: https://www.ngmn.org/fileadmin/ngmn/content/downloads/Technical/2018/180226_NGMN_RANFSX_D1_V20_Final.pdf
  • [40] G. Bennett. Open Line Systems and Open ROADM: How Open Is Your Line System? Presentation available at https://tnc18.geant.org/getfile/4520
  • [41] Metro Open ROADM Network Model V. 1.1. White paper, July 2016, source: http://www.openroadm.org/download.html.
  • [42] https://www.opennetworking.org/solutions/odtn/
  • [43] T. Szyrkowiec, A. Autenrieth, W. Kellerer. Optical Network Models and Their Application to Software-Defined Network Management. Hindawi International Journal of Optics, Sept. 2017.
  • [44] M. Belanger, M. O'Sullivan and P. Littlewood. Margin requirement of disaggregating the DWDM transport system and its consequence on application economics. M1E.2, OFC '18.
  • [45] T. Jimenez, J. C. Aguado, I. de Miguel, R. J. Duran, M. Angelou, N. Merayo, P. Fernandez, R. M. Lorenzo, I. Tomkos and E. J. Abril. A cognitive quality of transmission estimator for core optical networks. IEEE/OSA Journal of Lightwave Technology, vol. 31, no. 6, Jan. 2013.
  • [46] E. Seve, J. Pesic, C. Delezoide, and Y. Pointurier. Learning process for reducing uncertainties on network parameters and design margins. Optical Fiber Communications Conference (OFC) 2017. IEEE, Mar. 2017.
  • [47] L. Barletta, A. Giusti, C. Rottondi, and M. Tornatore. QoT estimation for unestablished lightpaths using machine learning. Optical Fiber Communications Conference (OFC) 2017, Mar. 2017.
  • [48] T. Panayiotou, S. Chatzis, and G. Ellinas. Performance analysis of a data-driven quality-of-transmission decision approach on a dynamic multicast-capable metro optical network. IEEE/OSA Journal of Optical Communications and Networking, vol. 9, no. 1, Jan. 2017.
  • [49] F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini and M. Tornatore. A Survey on Application of Machine Learning Techniques in Optical Networks. Apr. 2018, Available as arXiv:1803.07976 at https://arxiv.org/abs/1803.07976.
  • [50] D. Rafique and L. Velasco. Machine Learning for Network Automation: Overview, Architecture and Applications. IEEE/OSA Journal of Optical Communications and Networking, vol. 10, no. 10, Oct. 2018.
  • [51] M. Kist, J. Rochol, L. A. DaSilva, C. Bonato Both. SDR Virtualization in Future Mobile Networks: Enabling Multi-Programmable Air-Interfaces. Proc. of IEEE International Conference on Communications (ICC), 2018.
  • [52] Y. Ou, M. Davis, A. Aguado, F. Meng, R. Nejabati and D. Simeonidou. Optical Network Virtualisation Using Multitechnology Monitoring and SDN-Enabled Optical Transceiver. IEEE/OSA Journal of Lightwave Technology, Vol. 36, No. 10, May 2018.
  • [53] A. Elrasad, N. Afraz and M. Ruffini. Virtual Dynamic Bandwidth Allocation Enabling True PON Multi-Tenancy. Proc. of Optical Fibre Communications conference (OFC), paper M3I.3, March 2017.
  • [54] F. Slyne, A. Elrasad, C. Bluemm and M. Ruffini. Demonstration of Real Time VNF Implementation of OLT with Virtual DBA for Sliceable Multi-Tenant PONs. Tu3D.4, OFC 2018.
  • [55] BBF TR-402 technical report, "Functional Model for PON Abstraction Interface", October 2018.
  • [56] F. Slyne, J. Singh, R. Giller and M. Ruffini, Experimental Demonstration of DPDK Optimised VNF Implementation of Virtual DBA in a Multi-Tenant PON. ECOC 2018.
  • [57] M. Ruffini, M. Achouche, A. Arbelaez, R. Bonk, A. Di Giglio, N. J. Doran, M. Furdek, R. Jensen, J. Montalvo, N. Parsons, T. Pfeiffer, L. Quesada, C. Raack, H. Rohde, M. Schiano, G. Talli, P. Townsend, R. Wessaly, L. Wosinska, X. Yin and D.B. Payne. Access and metro network convergence for flexible end-to-end network design [invited]. IEEE/OSA Journal of Optical Communications and Networking, Vol. 9, No. 6, June 2017.
  • [58] N. Farrington, G. Porter, S. Radhakrishnan, H. H. Bazzaz, V. Subramanya, Y. Fainman, G. Papen, and A. Vahdat. Helios: a hybrid electrical/optical switch architecture for modular data centers. SIGCOMM Comput. Commun. Rev. 40, 4, Aug. 2010.
  • [59] P. N. Ji. Hybrid Optical-Electrical Data Center Networks. Proc. of OSA Photonic Networks and Devices, paper NeM3B.3, 2016.
  • [60] M. Ruffini, D. O'Mahony, L. Doyle. Optical IP Switching: a flow-based approach to distributed cross-layer provisioning. IEEE/OSA Journal of Optical Communications and Networking, Vol. 2, Issue 8, pp. 609-624, August 2010.
  • [61] World Economic Forum white paper. Digital Transformation Initiative Telecommunications Industry, Jan. 2017. Available at: http://reports.weforum.org/digital-transformation/wp-content/blogs.dir/94/mp/files/pages/files/white-paper-dti-2017-telecommunications.pdf
  • [62] Research & Markets technical report. Telecom API Market Outlook and Forecasts 2017 - 2022, May 2017.
  • [63] M. J. Creaner. Transforming the Telco. Centernode Publishing, Jan. 2018.
  • [64] M. Iansiti and K. R. Lakhani. Digital Ubiquity: How Connections, Sensors, and Data Are Revolutionizing Business. Harvard Business Review, Nov. 2014.
  • [65] M. Ruffini, Moving the Network to the Cloud: Multi-Tenant and Multi-Service Cloud Central Office (Invited Tutorial). Proc. of European Conference on Optical Communications (ECOC) 2018, paper We4D.1.
  • [66] L. Askari, A. Hmaity, F. Musumeci and M. Tornatore. Virtual-Network-Function Placement For Dynamic Service Chaining In Metro-Area Networks. Proc. of Optical Network Design and Modeling (ONDM) 2018.