Serving the traffic in this context refers to the ability of these resources to respond satisfactorily to the needs of the traffic, rather than having to send that traffic to a destination off the infrastructure edge computing network, such as a regional data centre reached via backhaul.
Network transport is the ability of a network to move data from one endpoint to another, such as from its source to its destination. This is the core functionality of all communications networks. It does not matter what scale the network operates at in terms of geographical coverage or the number of endpoints; ultimately it must provide transport from one endpoint to another, using however many intermediate endpoints or links are needed to achieve this single goal.
Network transit is the ability of a network to function as a bridge between two other networks. In the context of infrastructure edge computing, consider the diagram in Figure 3.6, which shows how the infrastructure edge computing network can provide transit services between an access network on the left and a backhaul network on the right. In this example, the infrastructure edge computing network is not serving any of the traffic which comes in from the access network and is instead simply passing it through to another destination, which is accessible via its own backhaul network.
Figure 3.6 Infrastructure edge computing network providing transit services.
A typical infrastructure edge computing network will aim to minimise the amount of network transit it provides. Although the network must be able to provide transit for traffic that it cannot serve itself, this capability should not be seen as its main use. It benefits the network operator if as large a proportion of access layer traffic as possible flows through the infrastructure edge computing network, because this provides the greatest opportunity to serve traffic using resources at the infrastructure edge; but if the bulk of this traffic cannot be served at the infrastructure edge, due to the tenants present or other factors, the network is simply joining access back to backhaul. This does not utilise the full capability of the infrastructure edge computing network for applications.
Of course, the physical data centre and network infrastructure of the infrastructure edge computing network are not, on their own, enough to serve traffic: first, the right networks must be interconnected at the infrastructure edge so that traffic can be exchanged efficiently without transporting it all the way to the IX and back, and second, the resources that an endpoint is trying to access must also be located at the infrastructure edge. These resources may include streaming video services, cloud instances, or any other network accessible resources, including new use cases such as IoT command and control.
Data Centre Interconnect (DCI) typically refers to the physical network infrastructure used to connect one data centre to another, regardless of the scale of the two facilities, combined with a protocol set used to facilitate inter‐data centre communication. This connectivity between facilities may be used to provide both transport and transit services; for example, in the context of several infrastructure edge data centres deployed within a single area such as a city, a resource that is not available in one data centre may be available in another to which that data centre is connected, directly or indirectly. In this case, the traffic can be sent to that serving data centre and served while still remaining on the same infrastructure edge computing network, providing some latency advantage.
Although in the ideal case traffic is served by the first infrastructure edge data centre that it enters, as long as the connectivity between infrastructure edge data centres is sufficient to provide a lower latency and cost of data transportation than sending the traffic back to another destination over a backhaul network, this process can still provide a better user experience than is otherwise possible.
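As a rough illustration of this preference, the following Python sketch (all names and data structures are hypothetical, not those of any particular implementation) tries to serve a request from the local edge data centre, then from an edge data centre reachable over DCI within the same area, and only then falls back to transiting the traffic over backhaul:

# Hypothetical sketch of the serving preference described above.
class EdgeDataCentre:
    def __init__(self, name, resources, neighbours=None):
        self.name = name                    # e.g. "edge-a"
        self.resources = set(resources)     # resource identifiers hosted locally
        self.neighbours = neighbours or []  # edge data centres reachable over DCI

def resolve(resource, local_dc):
    # Ideal case: the first edge data centre the traffic enters can serve it.
    if resource in local_dc.resources:
        return f"serve locally at {local_dc.name}"
    # Otherwise, check edge data centres connected over DCI within the same area.
    for neighbour in local_dc.neighbours:
        if resource in neighbour.resources:
            return f"serve via DCI from {neighbour.name}"
    # Last resort: transit the traffic over a backhaul network.
    return "transit over backhaul"

edge_b = EdgeDataCentre("edge-b", ["video/stream-42"])
edge_a = EdgeDataCentre("edge-a", ["cloud/instance-7"], neighbours=[edge_b])
print(resolve("video/stream-42", edge_a))   # serve via DCI from edge-b
print(resolve("iot/control-1", edge_a))     # transit over backhaul

The key point of the sketch is the ordering of the checks: remaining on the infrastructure edge computing network, even via a neighbouring facility, is preferred to leaving it.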
Physically, the network connectivity between data centres regardless of the scale of these facilities is typically implemented using high‐capacity fibre optic networks. These networks provide far greater capacity than any other currently used transmission medium and economically are capable of the lowest cost per bit of transmitted data by far when compared to alternative technologies such as copper or wireless networks. As data centre facilities are not physically moving, they do not require the mobility advantages of wireless technologies, and fibre exceeds the capacity possible in copper.
Additionally, many entities ranging from telecommunications network operators to municipalities have gone to considerable expense to lay fibre optic cabling throughout many urban and even some rural areas. Locations where this fibre happens to aggregate, such as at tower sites used for cellular networks, make ideal locations for infrastructure edge data centre deployments due to the ability to access existing fibre networks and minimise the expense of deploying the infrastructure edge itself.
3.12 Serve Transit Fail (STF) Metric
The ability of the infrastructure edge computing network to serve traffic is a key measure of its value, and so high targets, such as serving 75% of all traffic received from the access network, are common. Although these targets may seem high, the key to achieving them is defining a realistic scope and an understanding of real user needs; once the underlying infrastructure is established, this is typically a task for the tenants of the infrastructure edge computing network such as content providers. These entities must then optimise the software resources such as application instances or pieces of specific content present on the infrastructure edge network in order to maximise all the achievable benefits.
Three possibilities exist when an infrastructure edge computing network receives traffic. It can serve the traffic; it can transit the traffic to a destination which it believes can serve it; or it can drop the traffic, which is a failure. The last of these may occur when the infrastructure edge computing network has no route to any destination which it believes will be able to serve the traffic, and so the traffic is simply dropped. This is of course suboptimal and to be avoided wherever possible by all parties. In an ideal scenario, the infrastructure edge computing network performs measurably better than its centralised counterpart in terms of lower latency and reduced cost of data transportation, and introduces as few cases of lower performance as possible; where traffic cannot be served by the infrastructure edge network itself, transiting that data to a destination such as a regional data centre should not significantly decrease performance compared to the alternatives.
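As a minimal sketch of these three outcomes, assuming a simple per-flow decision (the Outcome enum, the classify function, and its inputs below are hypothetical rather than the behaviour of any real routing implementation), the logic might look like:

from enum import Enum

class Outcome(Enum):
    SERVE = "serve"      # answered by resources at the infrastructure edge
    TRANSIT = "transit"  # passed to a destination believed able to serve it
    FAIL = "fail"        # no usable route; the traffic is dropped

def classify(destination, local_resources, known_routes):
    # Per-flow decision following the three possibilities described above.
    if destination in local_resources:
        return Outcome.SERVE
    if destination in known_routes:
        return Outcome.TRANSIT
    return Outcome.FAIL

print(classify("video/stream-42", {"video/stream-42"}, {"regional-dc"}))  # Outcome.SERVE
print(classify("iot/control-1", set(), set()))                            # Outcome.FAIL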
A simple metric, serve transit fail (STF), can be used to gauge the value of an infrastructure edge computing network. It can be calculated as follows over a given period of operation; a short sketch after these steps illustrates the arithmetic:
1 Compare statistics from the ingress data flows (traffic entering the infrastructure edge network from any of its interconnected access networks) and the egress data flows (traffic from those same access networks that exits the infrastructure edge network in transit to another destination), collected over a period of time such as a week or longer to provide a realistic view of network operations.
2 Calculate the proportion of ingress to egress traffic as described previously. In this example, we will use the following figures for this step, chosen purely for their simplicity: ingress traffic of 100 GB and egress traffic of 10 GB. With these figures, 90% of the ingress traffic was either served or failed at the infrastructure edge network, and the remaining 10% was transited to another destination.
3 At this stage, our infrastructure edge computing network has a very high STF of 0.90. This is encouraging, although we do not yet have the full picture. Next, we must subtract the impact of any failures to deliver traffic from this metric. Because in this example we are measuring traffic only in terms of size and not in terms of individual session requests, we must coarsely estimate the impact of a failure by the amount of traffic that, had the failure not occurred, would have been served either on or off the edge. For this example, we will
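Expressed compactly, and using the step 2 figures of 100 GB ingress and 10 GB egress, the calculation might look like the sketch below. The 5 GB failure estimate is purely illustrative, and treating the failure impact as a simple subtraction from the interim figure is one plausible reading of step 3 rather than a formula taken from the text:

def stf(ingress_gb, egress_gb, estimated_failed_gb=0.0):
    # ingress_gb          traffic entering from interconnected access networks
    # egress_gb           traffic transited out towards other destinations
    # estimated_failed_gb coarse estimate of traffic that would have been served
    #                     had no failure occurred (step 3); illustrative only
    # Step 2: ingress minus egress is the served-or-failed share (100 - 10 = 90 GB,
    # an interim STF of 0.90); step 3 then removes the estimated failed traffic.
    return (ingress_gb - egress_gb - estimated_failed_gb) / ingress_gb

print(stf(100, 10))      # 0.9  interim figure before accounting for failures
print(stf(100, 10, 5))   # 0.85 with a purely illustrative 5 GB failure estimate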