IP has become the de facto standard for network-level communications as a consequence of the Internet's expansion. The growth of the Web led to the adoption of HTTP as the application-level standard, and therefore of TCP at the transport level. Multimedia and, in general, bandwidth-hungry or delay-constrained applications have historically employed datagram-based communications to avoid the limitations imposed by reliable connections. Despite the inadequacy of TCP, the increase in available bandwidth and the reduction in RTT have allowed its expansion. It is generally considered that a connection with twice the bandwidth required by the application can be entrusted to TCP without noticeable drawbacks.
As a means to enhance HTTP performance and take advantage of the geographical distribution of IP address ranges, Content Delivery Network (CDN) services appeared as caching services near the network edge, enhancing the perceived QoS by replicating content into caches at the network edges while ensuring content fairness. The scalability of the Internet, and in particular of multimedia and video content, is tightly attached to this kind of deployment.
Recently, the Future Internet (FI) community has seen proposals that, among other goals, try to break the ossification of IP networks and therefore of the Internet. Among those, Information-Centric Networking (ICN) breaks with the geographical implication of the network address by routing on the content instead of the location, thus focusing on what and not on where. In general, these systems provide in-network caching and are primarily represented by Content-Centric Networking (CCN). Also as part of the FI, the Software-Defined Networking (SDN) paradigm focuses on providing a programmable network, thereby facilitating the adoption of new protocols and architectures such as ICN. In addition, SDN replaces the usual distributed network control, managed by network elements speaking through control protocols, with a centralized control system in which the network elements become simple executors of the orders received via a control protocol, usually OpenFlow.
On the other hand, video streaming services based on the well-known RTP, RTSP and UDP combination have been gradually cast out in favour of TCP-based connections, such as those in RTMP and, lately, the HTTP-based plethora of video streaming protocols. The latter have clearly become more and more popular, partly because of their simplicity and their capability to transparently profit from all the enhancements made to HTTP over twenty years. In particular, HTTP-based video transmission techniques benefit from CDNs and from HTTP's ability to traverse proxies and firewalls. HTTP-based video streaming is a killer application for any CDN-like system and therefore should be part of any proposal.
In general, most FI approaches rely on clean-slate solutions, which makes them difficult to adopt straightforwardly. In , an ICN-as-a-Service (ICNaaS) architecture is presented that migrates the well-known and widely adopted CDN concept to a higher level by offering an 'as a Service', content-centric way of defining new HTTP caching systems by means of an SDN application, leveraging SDN to steer the traffic to the destination based on the requested URL. Unlike other proposals, the ICNaaS does not translate the HTTP traffic to an intermediate representation but adopts a half-way approach between clean-slate and legacy by employing a proxy to inspect the URL and inform the SDN controller about the intention to download a certain content. The caching systems employed in this solution are HTTP based and therefore any existing software cache, appliance or even an already deployed CDN is accepted.
In this paper we present an extension to the aforementioned system in which the centralized control of the network makes use not only of its topology knowledge but also of the metadata of the requested content to prefetch information from the source content provider, potentially increasing the system's cache hit ratio.
The remainder of this paper is organised as follows. Section II introduces the basic concepts needed to understand the rest of the publication, as well as the state of the art of what other researchers have proposed. Section III then introduces the ICNaaS architecture and its internals, on top of which the prefetching proposal is designed and evaluated. Sections IV and V present the prefetching mechanism for SVC on top of DASH and the results of our proof of concept for such a mechanism, respectively. Finally, Section VI provides conclusions and introduces future lines of work.
II State of the Art
'A CDN is a collection of network elements arranged for more effective delivery of content to end-users'. CDNs are usually geographically distributed caching systems offered to content providers and ISPs to move their content closer to the final user; hence, CDNs are usually operated by third parties.
CDNs are traditionally implemented through a mix of techniques such as HTTP redirection, DNS load distribution, anycast routing and application-specific solutions, among others. As a result, a complex distributed system is in charge of redirecting users' requests to clusters of network caches. The decision on which cache should receive a content request is usually based on the communication endpoints, regardless of the content being requested.
The CDN approach, which emerged as an effect of the Internet's evolution, is also constrained by the protocols on which it is based. As an alternative to the 'ossified' end-to-end and geographically attached IP communication system, the ICN paradigm advocates delivering requested resources based on the resources themselves, independently of the data transport. This can potentially increase the efficiency and scalability of content distribution, but it typically requires the deployment of state-of-the-art protocols like CCNx.
The ICN approach places information pieces (content) as the central element of the network: clients declare their 'interest' in content pieces, and providers offer and deliver them to the intermediate network elements which, in turn, collaborate to deliver the requested content pieces to those clients, therefore advocating what is known as in-network caching.
Among the vast bibliography related to ICN, two contributions are most closely related to the work introduced in Section III. Both studies employ DASH as a means for Video on Demand (VoD).
The authors in  introduce the 'Cache as a Service' concept and also base their OpenCache on SDN for VoD. In their proposal, the control and decision of what content is to be cached is delegated to the ISP NOC, which has the knowledge and ability to optimize network utilization and possibly save precious uplink bandwidth towards other ISPs. The authors also introduce the idea that, given certain SLAs, the solution could be exposed to content providers such as CDNs. We agree with this view and go further, arguing that what should be offered to content providers is the instantiation of exclusive ICN/CDN instances that can be customized and controlled by the providers and implemented on the ISP premises; the providers could then influence the behaviour of the hosted ICN by means of the offered caching algorithms.
Similarly, the authors in  evaluate the VoD paradigm on FI architectures, in particular CCN, also leveraging SDN. The motivation for their proposal and the previous study is the collision of interests between client-based rate adaptation in DASH and the in-network transparent content caching paradigm, and how the latter confuses the former. In their study, the authors state that even in a stable environment the video representation selected by the DASH client was not stable, producing an oscillating pattern. They also conclude that DASH chunk requests are spread among the available rates and are unlikely to be repeated in any future retrieval of the same video.
In their evaluation of CCN for DASH, the authors introduce a proxy to translate the HTTP requests into CCN interests and employ an SDN controller for the network traffic. Their findings are that ICN has two negative impacts on DASH: a reduced cache hit rate and imprecise rate estimation in the client, due to the difference between the channel to the cache and the channel to the VoD server. In their solution, the authors rewrite the MPD (Media Presentation Description) offered to the client, extending it with information related to the cache. Similarly, our proposal in , summarized in Section III, employs a proxy to feed the SDN controller, and in particular the ICNaaS application, with the URL in order to steer the traffic to the appropriate endpoint, but without the burden of translating a transport stream into datagrams.
In a non-clean-slate approach, the authors in  propose an ICN architecture focused on premises similar to those of ICNaaS, providing a non-clean-slate ICN system, in this case completely based on already existing and deployed CDN technologies and techniques: the incrementally deployable ICN, or idICN. In this case, the content must be explicitly registered in a proxy, which in turn registers it in DNS; in addition, a system to auto-configure the clients (such as proxy auto-config, PAC) is needed so that the border proxy is reached. We also agree with the authors on the importance of coordinating network traffic engineering, and in particular that of ISPs, with content engineering to fulfil the goals of each point of view.
The authors in  highlighted some future directions that are directly related to the ICNaaS architecture: service composition highly motivated by user preferences; dynamic content; and an adaptive CDN for media streaming.
Nowadays content delivery, with emphasis on video streaming, is the major source of bandwidth consumption on the Internet, so efficient and effective content distribution is a key aspect of deploying bandwidth-demanding services at large scale. Cisco estimates that 'With the emergence of popular video-streaming services that deliver Internet video to the TV and other device endpoints, CDNs have prevailed as a dominant method to deliver such content. Globally, 70 percent of all Internet traffic will cross CDNs by 2021, up from 52 percent in 2016. Globally, 77 percent of all Internet video traffic will cross CDNs by 2021, up from 67 percent in 2016'.
DASH is MPEG's standardized approach to HTTP-based video streaming. One of its most important characteristics is that it is codec agnostic, which allows it to evolve, embracing any codec, present or future; only compatibility with the ISO Base Media File Format for media storage is mandatory. It reuses already existing and consolidated technologies, such as HTTP and XML, to enable efficient and high-quality media delivery across networks. The idea behind DASH is to create redundant metadata, which provides extra functionality with insignificant overhead for the network architecture and the service provider. Thus, it delegates most of the complexity to the client.
To ease the streaming process, DASH may split the media into small chunks of data which are indexed by a so-called MPD file. A single MPD file can contain different representations of the same content with different characteristics. Therefore, a client can easily select or switch between different versions of the delivered (streamed) file, with different qualities such as bit-rate or picture size.
That said, the main advantage of DASH over other (non HTTP-based) streaming mechanisms is that both the chunks and the MPD files can be easily stored in already existing HTTP caching infrastructures. In addition, DASH straightforwardly takes advantage of almost any optimisation applied to existing infrastructures such as CDNs. For this and other applications, the DASH standard defines profiles and allows different modes (live, on-demand, and others), providing interoperability and suitability for different services.
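The client-driven representation switching described above can be illustrated with a minimal sketch. The function below is an illustrative example of the kind of rate-selection logic a DASH client applies, not part of any specific player or of the ICNaaS implementation; the representation list mimics attributes parsed from an MPD.

```python
def pick_representation(representations, measured_bps):
    """Pick the highest-bandwidth representation that fits the measured
    throughput, falling back to the lowest one when none fits."""
    fitting = [r for r in representations if r["bandwidth"] <= measured_bps]
    if not fitting:
        # Throughput below every representation: take the cheapest.
        return min(representations, key=lambda r: r["bandwidth"])
    return max(fitting, key=lambda r: r["bandwidth"])

# Hypothetical representations, as they would be parsed from an MPD.
reps = [
    {"id": "240p", "bandwidth": 400_000},
    {"id": "480p", "bandwidth": 1_200_000},
    {"id": "1080p", "bandwidth": 4_500_000},
]
```

A client would re-run this selection per chunk as its throughput estimate changes, which is precisely the oscillating behaviour that transparent in-network caches are reported to aggravate.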
SVC is the scalable extension to AVC; among its characteristics, its layered architecture, with dependencies between layers, and the backward compatibility of its base layer stand out. Although SVC was not widely adopted by the industry, there are efforts to provide a new generation of scalable codecs, such as SHVC.
There is clear interest in integrating DASH video streaming in ICN architectures such as CCN. We argue, nevertheless, that an end-to-end HTTP-capable ICN architecture is needed, and that it can be easily and optimally implemented by employing SDN. Thanks to such a system, mechanisms like the one presented in this paper, prefetching aware of the type of content being retrieved, can be implemented.
CDNs are already coping with the challenge of scaling HTTP-based services, as well as multimedia streaming services, in a non-disruptive way. Nevertheless, CDNs rely on techniques that have appeared and evolved as patches to initial design limitations, such as HTTP redirection or DNS load distribution, and to the ossification of the IP environment. From a business perspective, CDNs are usually provided by third parties or only available to sufficiently resourceful companies. The possibility for any content provider to instantiate on-demand CDN-like mechanisms as a service is a key characteristic for reducing the distance between smaller and bigger content providers; this approach could even provide the means for content delivery optimization for internal use, such as intranets.
The Internet has become a reliable and indispensable service that stands on top of a patched and unreliable mechanism. Although the temptation to clean-slate the system and produce new paradigms and architectures is great, the solutions to today's problems, as well as the enhancement proposals, must be backward compatible and realistic.
The ICN paradigm advocates delivering requested resources based on their name, independently of the data transport.
On the other hand, in the last few years we have witnessed the rise of SDN and the high momentum it has gained. By means of a logically centralized controller that maintains a global view of the network and exposes a programmatic interface, SDN offers huge opportunities for network programmability, service automation and simplified management.
With all of that in mind, we proposed an architecture to deploy ICN as a service provided by SDN-enabled networks while remaining completely backward compatible with the legacy HTTP end-to-end approach.
In order to steer HTTP traffic based on its URL, the URL itself needs to be known. From the TCP connection perspective, the URL is not known until the 3-way TCP handshake has been completed. TCP splicing, or delayed binding, is a technique widely used by proxies to delegate to the kernel the rest of the communication once a milestone has been reached, reducing resource consumption. Similarly, in the ICNaaS the controller delegates to the proxy the initiation of the TCP connection and the inspection of the URL inside the HTTP request, by redirecting any client connection to its nearest proxy. At that point the controller is informed by the proxy (employing message #6 in Table I) and can select the cache and prepare the connection from the proxy to the cache for that precise request; the proxy is informed about the matching rules for that connection and can then connect to the cache. The decision is made based on the running ICN instances as well as the URL being requested.
To make the process transparent to both ends, the IP and MAC rewriting capabilities exposed by SDN are employed, so that the connection from the proxy to whatever IP and TCP port is rewritten to the cache's IP, TCP port and MAC; thus the cache's operating system transparently accepts the incoming packets. The details of the design and solution are presented below.
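The delayed-binding step hinges on the proxy recovering the URL from the first bytes of the client's request before any upstream connection exists. The following is a minimal, self-contained sketch of that extraction (request-line plus Host header), assuming plain HTTP/1.x; it is illustrative and not the actual proxy implementation, and the sample request is hypothetical.

```python
def extract_url(raw_request: bytes):
    """Parse the request line and Host header of an HTTP/1.x request to
    recover the URL that message #6 would report to the controller."""
    head = raw_request.split(b"\r\n\r\n", 1)[0].decode("iso-8859-1")
    lines = head.split("\r\n")
    method, target, _version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    if target.startswith("http://") or target.startswith("https://"):
        return method, target  # absolute-form request target
    # origin-form target: rebuild the URL from the Host header
    return method, "http://" + headers.get("host", "") + target

# Hypothetical first request read by the proxy after accepting the client.
request = (b"GET /video/chunk42.m4s HTTP/1.1\r\n"
           b"Host: cdn.example.org\r\n\r\n")
```

Only after this parse succeeds does the proxy notify the controller and open the spliced connection towards the selected cache.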
The ICNaaS architecture is designed to be content independent, the only restriction being the use of the HTTP protocol, which in turn was the main objective to be achieved. As introduced in the previous section, video streaming is expected to be the highest bandwidth-consuming content type in the near future, and the adoption of HTTP video streaming is already a fact; therefore, approaching VoD as the first use case seems reasonable. Moreover, exploiting the correlation between the MPD, downloaded as a bootstrap of the video streaming process, and the video chunks that will most likely be downloaded afterwards allows the definition of advanced caching policies and techniques such as prefetching.
We consider customized CDN creation to be a service to be provided as part of ISP services and inside their premises. Benefits for the ISP range from reducing uplink bandwidth consumption towards third parties (CDNs) to market diversification by offering CDN-like services themselves. As a consequence of adopting SDN, the CDN can be easily and dynamically rearranged, and the provider himself can interact with the system. The caching mechanism, as well as the content-to-cache assignment algorithm, can also be modified on demand.
The ICNaaS vision considers five networks interconnecting the system actors, as shown in Figure 1. Apart from the typical SDN Control Plane and SDN Data Plane, as well as the SDN Management, we define two new networks: the ICN Control and the ICN Management. The former is intended for communication between the ICNaaS system, usually implemented as an SDN application, and the ICN network elements in charge of offloading the SDN controller from tasks related to ICN data transmission, while the latter is employed by the Content Provider or the ISP operator to communicate with the system and arrange the ICN instances. This distinction comes from the functionality perspective and not from a real need to isolate such networks; the ICN and SDN Management networks are probably simply the Internet, while the ICN Control and SDN Control Plane may well share the same collision domain, but the actors involved and the type of communication carried on each network are also differentiating factors.
The interactions between the Content Provider and the ICNaaS system to create, modify and remove proxies, caches and prefetchers (explained in Section IV), as well as the ICN instances themselves, are provided through the REST interface (messages #1-5, detailed in Table I) and are represented with green arrows in Figure 2 (which shows the whole system working). The information necessary for each of these elements is the location, represented in OpenFlow by the tuple (dpid, port), and the link- and network-level information, namely MAC, IP and TCP port. These values are needed when steering traffic to a certain element, to be rewritten in the paths created with OpenFlow flows so that the operating systems running on the different elements of the network transparently accept packets which would otherwise be directed to other hosts, therefore enabling the integration of any existing caching system in the architecture. Finally, the Provider servers are to be registered as part of the ICN; for these, standard IP routing is used, and only the source network, the URI pattern and the host pattern are supplied to filter which requests go to which ICN instance. The source network serves as a client filter and can range from any host to a precise host, including a whole network mask; the URI pattern filters the provider server being referenced (note that any CDN or DNS balancing mechanism existing outside the SDN network remains valid, since routing is based on the URI); and finally the host pattern, included in the ICN instantiation, employs the Host HTTP header to identify the requests related to this ICN instance.
| # | Network | From | To | Endpoint | Parameters |
| 1 | ICN Management | Provider | ICNaaS | onos/icn/icn | name, description, type |
| 2 | ICN Management | Provider | ICNaaS | onos/icn/proxy | name, description, mac, ip, proxy_port, type, location (dpid,port), isProactive |
| 3 | ICN Management | Provider | ICNaaS | onos/icn/prefetch | name, description, mac, ip, prefetch_port, type, location (dpid,port) |
| 4 | ICN Management | Provider | ICNaaS | onos/icn/cache | name, description, mac, ip, port, type, location (dpid,port) |
| 5 | ICN Management | Provider | ICNaaS | onos/icn/provider | instance, name, description, network, uripattern, hostpattern |
| 6 | ICN Control | Proxy | ICNaaS | onos/icn/proxyrequest | uri, hostname, smac, source ip, destination ip, protocol, source port, destination port |
| 7 | ICN Control | ICNaaS | Prefetcher | /prefetch | uri, server, port |
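As an illustration of the management interface, the helper below builds the payload for message #4 (cache registration) from Table I. The endpoint path and parameter names come from the table; the exact JSON structure, in particular the nesting of the location tuple, is an assumption for illustration rather than the system's actual wire format.

```python
def cache_registration(name, mac, ip, port, dpid, dev_port,
                       description="", cache_type="squid"):
    """Build the body for message #4 (onos/icn/cache) in Table I.
    The JSON nesting shown here is assumed, not normative."""
    return {
        "name": name,
        "description": description,
        "type": cache_type,
        "mac": mac,
        "ip": ip,
        "port": port,
        # OpenFlow attachment point of the cache
        "location": {"dpid": dpid, "port": dev_port},
    }

# Hypothetical cache attached to port 2 of switch of:0000000000000001.
payload = cache_registration("edge-cache-1", "00:11:22:33:44:55",
                             "10.0.0.5", 3128, "of:0000000000000001", 2)
# A provider would POST this, e.g. to <controller>/onos/icn/cache.
```

Messages #2 (proxy) and #3 (prefetcher) follow the same shape with their respective port fields, plus isProactive for the proxy.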
Once proxies and delivery networks have been set up through the public Northbound API, the SDN application programs the network devices to redirect HTTP requests targeted at a content provider towards the closest proxy; the redirection can be performed reactively, once a TCP_SYN message arrives at the controller, or proactively. The proxy then uses the private Northbound API, through what we denominate the ICN Control network (message #6 in Table I), to notify the SDN application about the requested resource (the URL, plus the link-, network- and transport-layer information). If the resource is not to be handled by any ICN instance, the controller (depending on the NOC policies) steers the traffic to the URL through the default gateway or discards the host's traffic. Otherwise, the application programs a bidirectional flow from the proxy to the most appropriate cache holding the resource. Note that the ICNaaS system is notified of the URL together with the source address and port identifying each session, so that each URL is retrieved independently. If the resource is being requested for the first time, the application is responsible for choosing the most appropriate cache (according to the operator's policy) and programming the associated flows. Since this provokes a cache miss, the content must be downloaded from the origin server, but it will be available for future requests.
In order to implement name-based content placement and retrieval, the SDN application must inspect HTTP flows originated by consumers and targeted at providers within a delivery network. However, the application cannot find out the resource URI until the TCP three-way handshake has finished. This is problematic because the application must direct the flow to the appropriate cache or origin server from the first TCP SYN segment onwards. To overcome this issue, we have implemented a flexible HTTP proxy that performs delayed binding (or TCP splicing) and provides our SDN application with the name of the requested resource; details of the message sequence are shown in Figure 2.
The capabilities of SDN, and in particular of the OpenFlow protocol, to rewrite message headers are used. In the case of the communication between the client and the proxy, the proxy receives the TCP_SYN message with the destination MAC, IP address and TCP port rewritten to those registered in the ICNaaS system for that precise proxy; in previous approaches, the receiver (be it the proxy or the cache) needed to apply transparent proxy techniques and Linux iptables rules to accept packets directed to a different MAC, a different IP and sometimes even a different port. OpenFlow, from version 1.1.0 onwards, defines Set-Field actions, in particular the optional OFPAT_SET_DL_DST for rewriting the destination MAC, OFPAT_SET_NW_DST for rewriting the destination IP address and OFPAT_SET_TP_DST for the destination TCP port, which are used for the forward connectivity flows; similarly, for the backward connectivity flows carrying responses, the optional OFPAT_SET_DL_SRC, OFPAT_SET_NW_SRC and OFPAT_SET_TP_SRC are used to rewrite the source fields. Being optional means that vendors are not obliged to implement them, and in some cases the actions are only available in software tables, meaning they are not executed by the device's hardware pipeline.
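To make the forward-path rewriting concrete, the sketch below builds a flow rule in the JSON style accepted by ONOS's REST API, combining the match on the virtual endpoint with the three Set-Field equivalents (L2/L3/L4 modification instructions). It is a hedged illustration: the device id, addresses and priority are hypothetical, and the exact field names should be checked against the ONOS version in use.

```python
def rewrite_flow(device_id, in_port, out_port,
                 match_ip, cache_mac, cache_ip, cache_tcp):
    """Forward-path flow: match traffic addressed to the virtual endpoint
    and rewrite destination MAC/IP/TCP port to the cache's, mirroring
    OFPAT_SET_DL_DST / OFPAT_SET_NW_DST / OFPAT_SET_TP_DST."""
    return {
        "priority": 40000, "timeout": 0, "isPermanent": True,
        "deviceId": device_id,
        "selector": {"criteria": [
            {"type": "IN_PORT", "port": in_port},
            {"type": "ETH_TYPE", "ethType": "0x0800"},
            {"type": "IPV4_DST", "ip": match_ip},
        ]},
        "treatment": {"instructions": [
            {"type": "L2MODIFICATION", "subtype": "ETH_DST", "mac": cache_mac},
            {"type": "L3MODIFICATION", "subtype": "IPV4_DST", "ip": cache_ip},
            {"type": "L4MODIFICATION", "subtype": "TCP_DST", "tcpPort": cache_tcp},
            {"type": "OUTPUT", "port": out_port},
        ]},
    }

# Hypothetical values for a proxy-to-cache forward flow.
sample = rewrite_flow("of:0000000000000001", 1, 2,
                      "10.0.0.5/32", "00:11:22:33:44:55", "10.0.0.9", 3128)
```

The backward flow is symmetric, using the *_SRC subtypes so that responses appear to come from the original virtual endpoint.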
We envision the ICNaaS as a three-layer system: the ICNaaS itself, in charge of managing instances and steering traffic; a protocol-specific layer, aware of the specifics of the service being provided over HTTP, such as DASH or HTML; and finally the data layer, with knowledge about the data itself, be it SVC video or a simple JPEG. The caching location decision is performed between the last two layers and can be configured by the Provider.
The presented system serves as the basis for enabling content-aware HTTP prefetching systems.
IV Prefetching URLs
Thanks to the URL extraction mechanism and its provision to the ICNaaS, and by exploiting the correlation between HTTP requests, the system can proactively request the content that is foreseen to be requested as a consequence of the current URL. The simplest example is a web page whose IMG tags point to pictures that will presumably be downloaded just after the page itself. The aim is to enhance the cache hit ratio, thereby reducing the download time. To that end, we introduce the role of 'cache accelerator' or 'prefetcher'.
Some appliances may offer the means to explicitly request content caching; in that case the prefetcher can make use of that method, and in some cases the ICNaaS might itself contact the cache directly. In the latter case, the cache must also be connected to the ICN Control network (shown in Figure 1).
There is also the possibility of creating fake HTTP requests that trigger the caching mechanism transparently, remaining vendor agnostic. The term fake here refers to the fact that the request is issued not by a customer but by the ICNaaS system itself, predicting future requests; moreover, the requests are not performed fully, as only the first few bytes are retrieved, avoiding unneeded network load. The cache accelerator could perfectly well be implemented as part of the proxy, reducing the number of trust relations of the ICNaaS application, which sits on top of the ISP's SDN controller, a critical component of the network.
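The fake-request idea can be sketched as follows: build a request indistinguishable from a client's, send it through the cache, read only the first bytes and close. This is an illustrative sketch, not the actual prefetcher; the host and path are hypothetical, and whether the cache keeps downloading after the early close depends on its configuration (for example, Squid's quick_abort settings).

```python
import socket

def build_prefetch_request(host, path):
    """Build a client-identical GET request for the prefetcher to send."""
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n\r\n").format(path, host).encode("ascii")

def prefetch(host, port, path, peek=1024):
    """Send the fake request and read only the first bytes of the reply;
    the cache is expected to keep fetching the full object from the origin."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(build_prefetch_request(host, path))
        return sock.recv(peek)  # a few bytes suffice to trigger caching
```

Because the request is a plain HTTP GET, any standard caching system can be populated this way without vendor-specific APIs.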
Introducing the actual network state into the equation when steering traffic to the caches has further potential advantages; for example, in the case of video streaming, the available bandwidth per link might be used to force users onto a certain bit-rate version by dropping any request for higher bit-rate versions.
As a consequence, the Prefetcher REST interface is defined as message #7 in Table I, enabling communication between the ICNaaS and the entity issuing the requests that populate the cache before the content is requested. Note that in this case the ICNaaS acts as a client and not as the service. This functionality could be implemented as part of the controller by means of OFPT_PACKET_IN and OFPT_PACKET_OUT messages, which would imply at least 5 TCP messages per request (the 3-way handshake, the HTTP request and the FIN). That approach would nevertheless depend on the caching entity's behaviour upon receiving the TCP FIN message: if it continues the download on the server side, the prefetch is fruitful; if not, it is a waste. Moreover, this process would take these 5 messages per chunk, a number that grows linearly when there are dependencies between chunks, as is the case of SVC layers in DASH streams, probably overloading the controller. The prefetcher, which could be collocated with the cache itself to avoid network load, implements a full HTTP stack, meaning it issues requests identical to those issued by clients, so that any caching system can be used.
In the case of DASH video streaming, the ICNaaS detects in the protocol-specific layer that an MPD file has been requested. The system downloads the MPD file in parallel with the client request (the MPD data could also be supplied by the proxy, but this keeps the proxy as simple as possible) and analyses it, storing the Representation URLs so that each later request can be matched to its Representation. When the client later requests a certain URL, the matching is performed and the corresponding Representation URLs are notified to the prefetcher for download.
If the data-specific layer detects a scalable video codec, such as SVC, an extended matching is needed so that not only the Representation for that chunk is retrieved, but also those on which the chunk's layer depends.
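The extended matching above amounts to computing, for each Representation in the MPD, the transitive closure of its dependencyId attribute. The sketch below illustrates this with the standard DASH MPD namespace; the sample MPD is hypothetical and much smaller than the real dataset used in Section V.

```python
import xml.etree.ElementTree as ET

MPD_NS = "{urn:mpeg:dash:schema:mpd:2011}"

def representation_deps(mpd_xml):
    """Map each Representation id to the transitive set of representation
    ids it depends on (SVC layer dependencies via 'dependencyId')."""
    root = ET.fromstring(mpd_xml)
    direct = {}
    for rep in root.iter(MPD_NS + "Representation"):
        # dependencyId is a whitespace-separated list of ids (may be absent)
        direct[rep.get("id")] = rep.get("dependencyId", "").split()

    closure = {}
    def resolve(rid):
        if rid not in closure:
            deps = set()
            for dep in direct.get(rid, []):
                deps.add(dep)
                deps |= resolve(dep)
            closure[rid] = deps
        return closure[rid]

    for rid in direct:
        resolve(rid)
    return closure

# Hypothetical three-layer SVC stream described in an MPD.
SAMPLE_MPD = """<MPD xmlns="urn:mpeg:dash:schema:mpd:2011">
 <Period><AdaptationSet>
  <Representation id="L0" bandwidth="400000"/>
  <Representation id="L1" bandwidth="900000" dependencyId="L0"/>
  <Representation id="L2" bandwidth="2000000" dependencyId="L1"/>
 </AdaptationSet></Period></MPD>"""
```

With this mapping, a request for a chunk of L2 lets the prefetcher also schedule the matching chunks of L1 and L0.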
We consider DASH with SVC a very interesting scenario for demonstrating the possibilities offered by metadata parsing for content prefetching; therefore, we produced a proof of concept and evaluated it in the laboratory.
As a showcase of the possibilities offered by metadata parsing, we have implemented a rather simple SVC-aware distributed caching system. The solution distributes the SVC layers uniformly over the caches available between the client and the network gateway, placing nearer to the client the lower scalability levels, which have a higher probability of being requested. Thanks to the MPD parsing process, the ICNaaS knows exactly how many representations the video has and can assign each layer to a cache, depending on the number of registered caches. When a chunk is requested, the representation to which it belongs is matched, revealing which other representations it depends on; with that information, every possible URL in the operation point is precomputed and associated with a cache, so that subsequent requests for related URLs simply create the path to the corresponding cache. The algorithm implemented and evaluated in Section V is shown in Listing 1.
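The layer-to-cache distribution policy just described can be sketched as follows. This is a simplified illustration consistent with the stated policy (uniform split, lower layers nearer the client), not a reproduction of Listing 1; cache names are hypothetical.

```python
def assign_layers(num_layers, caches):
    """Distribute SVC layers uniformly over the caches on the path from
    client to gateway. `caches` is ordered nearest-to-client first, so the
    lower (more frequently requested) layers land closest to the client."""
    per_cache = -(-num_layers // len(caches))  # ceiling division
    return {
        layer: caches[min(layer // per_cache, len(caches) - 1)]
        for layer in range(num_layers)
    }

# e.g. five scalability layers over three caches along the path
placement = assign_layers(5, ["edge", "aggregation", "core"])
```

The base layer (layer 0) thus always sits on the cache nearest the client, while the highest enhancement layers fall back towards the gateway.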
V Evaluating the Prefetching Mechanism
The architecture described in  and introduced in Section III has been extended with the prefetching mechanism described in Section IV and evaluated in two different testbeds: one with real switches and real wiring, and one completely virtualized, employing Mininet 2.2.2 with OVS 2.9.2 for network virtualization and LXC containers on top of a dual-socket 'Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz'. This section details the outcomes of the evaluation.
The ICNaaS, with the prefetching mechanism implemented as a configurable ICN alternative, has been implemented on top of the ONOS controller. The proxy and the prefetcher have been implemented in Python 3 based on the Tornado library, with an nginx proxy to allow request queueing. The caches are based on the well-known Squid 3.5.12. For simplicity, we deploy the prefetcher alongside the proxy, but it could be deployed anywhere in the network (even collocated with each cache).
The video employed for the evaluation is http://concert.itec.aau.at/SVCDataset/dataset/mpd-temp/BBB-I-1080p.mpd, from the Institute of Information Technology, Alpen-Adria-Universität Klagenfurt, Austria, which contains 50 different SVC scalability levels and has a duration of 10 minutes.
Our first evaluation was carried out on a scenario with HPE Aruba 2920 switches running software version WB.16.04.0008, configured to use OpenFlow 1.3, which is the highest version available for these switches and the most stable version supported by ONOS. In this evaluation, dependencyId 18 is requested by the DASH client. Note that we deactivated the client-side decision algorithm in order to focus the evaluation on the system and not on the effects of client decisions on caching systems, as discussed above and introduced by Grandl et al. In terms of cache hit ratio, the prefetching mechanism achieves on average (see Figure 4); the same scenario without the prefetching mechanism achieves its hit ratio on the second run, but that actually implies two full video streaming processes per run; 20 different runs were performed as the base case. On average, 2013 HTTP requests are made to the cache, of which 722 are hits, which per layer yields ratios of , , and . On average, the first hit is achieved 1.8 seconds after the MPD file is retrieved, while the precaching process finishes on average 4 minutes and 43 seconds later, the length of the video being streamed being 10 minutes.
The results obtained by the prefetching mechanism are far from those obtained in the base case. One of the problems found in our research is that the HP switches do not support IP address rewriting in their hardware pipelines. As a consequence, the IP rewriting of each packet is performed by the switch's main CPU, which is not intended for such a load. As can be seen in Figure 5, the switch's CPU is overwhelmed, and therefore packet loss may have affected the results; even ONOS produces error logs reporting switch connectivity loss.
In order to rule out the hardware-related problems, we migrated our testbed to mininet and connected the same software instances employed in the hardware scenario to the network clone. For this testbed, a series of 20 runs per case was performed. The cases analysed are: non-cached retrieval, as a reference; the empty-cache case, in which every request produces a cache miss; a full-cache case, performed immediately after the previous one; the prefetching case, again starting with empty caches; and finally prefetching with the distributed svc cache allocation algorithm. The results in terms of mean chunk download time are shown in Figure 6. As expected, the direct connection case outperforms access to an empty cache but is in turn outperformed by any of the cached approaches. In 4 out of 20 runs, the prefetcher and the distributed-svc-with-prefetching cases show slightly poorer performance. We consider these slight differences a consequence of the prefetcher and proxy implementations, which retry connections with exponential backoff. This was needed to overcome the onos queuing approach to flow installation and could be avoided by "simply" employing flow-addition listeners and notifying the two entities once the flows have finally been installed on the devices. We followed a more aggressive approach: the entities are notified as soon as the controller accepts the request and then retry their connection attempts.
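The retry behaviour used by the prefetcher and proxy can be sketched as a generic exponential-backoff helper; the attempt limit and delays below are illustrative, not the values used in our implementation:

```python
import time

def retry_with_backoff(connect, attempts=5, base_delay=0.1):
    """Call connect() until it succeeds, doubling the wait after each failure."""
    delay = base_delay
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(delay)
            delay *= 2  # exponential backoff
```

The entity calls `retry_with_backoff` as soon as the controller accepts the flow request; the first attempts fail until the flows are actually installed on the devices, after which the connection goes through.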
As part of the client chunk download time evaluation, we analysed the time taken by the proxy to notify the controller and receive the answer confirming that the channel to the cache for that precise chunk has been created; the results are shown in Table II. As can be seen, the values stay below 15 milliseconds, which represents half of the time spent accessing a cache on a HIT event.
As can be seen in Tables III, IV and V, with the software switches, which are not constrained by the hardware pipeline limitations, the achieved hit ratio is more in line with the expected values, around 90%. We have to take into account that most of the MISS events are cases in which the chunk had previously been requested but Squid refused to cache it, producing what the logs mark as 'TCP_SWAPFAIL_MISS'; it is not the aim of this paper to evaluate Squid, since it is just one of the caching options available on the market.
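The hit ratios reported here are derived from the Squid access log, whose result code field (e.g. TCP_HIT, TCP_MISS, TCP_SWAPFAIL_MISS) classifies each request. A rough sketch of the computation, assuming Squid's default native log format with the result code in the fourth field before the '/':

```python
def hit_ratio(log_lines):
    """Fraction of requests whose Squid result code marks a cache hit."""
    hits = total = 0
    for line in log_lines:
        fields = line.split()
        if len(fields) < 4:
            continue  # skip malformed lines
        code = fields[3].split("/")[0]  # e.g. "TCP_HIT" from "TCP_HIT/200"
        total += 1
        if "HIT" in code:
            hits += 1
    return hits / total if total else 0.0
```

Note that TCP_SWAPFAIL_MISS counts as a miss here even though the chunk had been requested before, which is exactly the effect discussed above.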
Even though the migration to mininet has produced better hit ratio results, it is still far from the cache hit ratio. The controller plays an active role in our prefetching solution, which forces it to maintain information about the data sources, such as the parsed mpd information. Another possible approach would be to fully delegate the prefetching mechanism to another entity, which would inform the controller via a REST API. This solution would also reduce the cpu consumption of the controller but would limit the possibilities for future caching decision algorithms. The cpu and network load are shown in Figures 7 and 8 for three random runs of each scenario. As can be seen, there is a cpu spike at the beginning of the prefetching-enabled cases as a consequence of mpd analysis and cache calculation. The network usage, on the other hand, shows spikes below 60 Mbps, which include openflow traffic as well as prefetcher and proxy signalling. Since the network load is not sustained over long periods of time, roughly a few seconds, we do not foresee it as a problem.
The evaluation results for prefetching with the distributed svc cache allocation algorithm have already been presented in Table V, in which two caches were employed. As can be seen, C1 (standing for cache 1, the nearest to the client) receives more requests than C2. The reason is that the requested representation is 33 out of 50, whose dependencies are "49 48 34 33 32 18 17 16 2 1 0". Following the algorithm described in Listing 1, Ids over 18 are distributed to C2 and those below 32 to C1; since the experiment requests Id 33, only 2 layers are stored in C2 while 6 layers are stored in C1.
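The split in this example can be illustrated with a threshold-based allocation; this is a sketch consistent with the numbers above, not a transcription of Listing 1, and the threshold of 18 and the cache names are taken from the experiment:

```python
def allocate_layers(dependency_ids, requested_id, threshold=18):
    """Place the layers needed to decode requested_id into two caches:
    base layers (id <= threshold) in the nearest cache C1, the rest in C2."""
    needed = [i for i in dependency_ids if i <= requested_id]
    return {
        "C1": sorted(i for i in needed if i <= threshold),
        "C2": sorted(i for i in needed if i > threshold),
    }
```

With the dependency list of representation 33, this yields layers 0, 1, 2, 16, 17 and 18 in C1 and layers 32 and 33 in C2, matching the 6/2 distribution observed in Table V.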
VI. Conclusions and Future Work
We presented a system implementing icnaas with end-to-end http communication that is transparent to the end points involved and capable of integrating legacy servers and caching systems.
On top of that system, we have envisioned a way to predict which content is going to be retrieved, thanks to the correlation usually present in http solutions, and have integrated it into the icnaas system and evaluated it. The evaluation has shown the feasibility of the proposal and the possibilities that metadata parsing opens for the prefetching mechanism, and has demonstrated one specific instantiation with the Distributed SVC caching allocation mechanism, opening a new field of research. Nevertheless, the high cpu usage has to be taken into account in future studies, probably by off-loading the controller to a side entity.
Another field of research opened by metadata handling in the sdn controller is the security of the controller: even though this is already an active field, it is now extended by the possibility of attacking the controller through malicious metadata files, such as an intentionally crafted mpd.
Our research line has two next steps. The first is to investigate the caching distribution algorithms mentioned above in order to provide alternatives for different goals, such as reducing the zapping time or saving bandwidth between the leaves and the root of the network. The second is the inclusion of this icnaas in the MANO architecture, studying how the latter could take care of the deployment of the proxies, the prefetchers and, if needed, the caches, and register them for the Provider.
This paper has been funded by the H2020 EU project ANASTACIA, Grant Agreement No. 731558, and by the GN4-2 project under Grant Agreement No. 731122.
-  (2014) A Survey of Software-Defined Networking: Past, Present, and Future of Programmable Networks. IEEE Communications Surveys & Tutorials 16 (3), pp. 1617–1634.
-  (2015) A SDN Based Method of TCP Connection Handover. pp. 13–19.
-  (2008) Multimedia Streaming via TCP: An Analytic Performance Study. ACM Transactions on Multimedia Computing, Communications & Applications 4 (2), pp. 16:1–16:22.
-  (2013) Information-Centric Networks: A New Paradigm for the Internet.
-  (2017) Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2016–2021. White Paper.
-  (2013) Dynamic adaptive streaming over HTTP (DASH) — Part 1: Media presentation description and segment formats. pp. 1–61.
-  (2013) Less Pain, Most of the Gain: Incrementally Deployable ICN. Proceedings of ACM SIGCOMM 43 (4), pp. 147.
-  (2014) Cache as a service: Leveraging SDN to efficiently and transparently support video-on-demand on the last mile. Proceedings - International Conference on Computer Communications and Networks, ICCCN.
-  (2013) On the interaction of adaptive video streaming with content-centric networking. 2013 20th International Packet Video Workshop, PV 2013.
-  (2012) Information-centric networking research group. Note: https://irtf.org/icnrg
-  (2009) Networking named content. In Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '09), New York, NY, USA, pp. 1–12.
-  C. A. Long, A. Obi, and M. Frederick (Eds.) (2002) Load Balancing Servers, Firewalls, and Caches. Robert Ipsen.
-  (2015) A scalable video coding dataset and toolchain for dynamic adaptive streaming over HTTP. Proceedings of the 6th ACM Multimedia Systems Conference - MMSys '15, pp. 213–218.
-  (2018) Information-centric network for future internet video delivery. In User-centric and Information-centric Networking and Services: Access Networks and Emerging Trends.
-  (2006) A Taxonomy and Survey of Content Delivery Networks. Grid Computing and Distributed Systems GRIDS Laboratory, University of Melbourne, Parkville, Australia 148, pp. 1–44.
-  (2011) OpenFlow Switch Specification 1.1.0. pp. 1–36.