Understanding Infrastructure Edge Computing. Alex Marcham
transportation. This topic will be explored in a later chapter in this book, but its existence is a useful primer for the reader at this point as well.
Many agreements to exchange data between networks are reciprocal: one network agrees to interconnect with another because each brings a roughly equivalent benefit to the other, both in terms of accessible endpoints and resources and in the balance of traffic that each sends and receives with the other network, creating an equilibrium.
However, this understanding has been tested in recent years by the widespread use of streaming video services and other networks which send orders of magnitude more traffic than they receive. The sheer bandwidth consumed by these services can place significant strain on the networks with which they interconnect, and this has prompted some network operators to move such relationships away from traditional reciprocal agreements.
Other agreements are considered “pay to play” arrangements, where a network agrees to interconnect with another network only if that network pays for the privilege. Ongoing payments may then be organised on a usage basis or by some other means agreed between the parties involved. Although unpopular with some, paying for interconnection is a viable option where one party requires an additional incentive to interconnect with another; the alternative is that the interconnection does not occur at all, reducing the overall ability of the internet to grow over time.
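The reciprocal versus “pay to play” decision described above often hinges on how balanced the traffic exchange is. A minimal sketch of that logic follows; the function name, the 2:1 ratio threshold, and the traffic figures are purely illustrative assumptions, not industry-standard values or any operator's actual policy.

```python
def peering_recommendation(sent_tb: float, received_tb: float,
                           max_ratio: float = 2.0) -> str:
    """Suggest a peering arrangement from monthly traffic volumes (TB).

    A settlement-free (reciprocal) agreement is suggested while the
    traffic exchanged in each direction stays roughly balanced; once
    one side sends far more than it receives -- as with large video
    streaming services -- a paid ("pay to play") arrangement is
    suggested instead. The 2:1 threshold is an illustrative value.
    """
    if received_tb == 0:
        return "paid"
    ratio = sent_tb / received_tb
    return "reciprocal" if 1 / max_ratio <= ratio <= max_ratio else "paid"

print(peering_recommendation(100, 90))   # roughly balanced exchange
print(peering_recommendation(1000, 50))  # heavily outbound, e.g. video streaming
```

In practice such decisions also weigh route quality, geography, and commercial strategy, but a traffic-ratio test of this kind is a common starting point for the imbalance problem the text describes.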
On‐ramps are an interesting part of network interconnection and exchange which have become more prominent over the past decade with the rise of cloud computing. Although there is some variation in how the term is used, an on‐ramp generally refers to a dedicated piece of network infrastructure which gives one or more parties direct access to the network of a cloud provider. Examples of these on‐ramp services include Amazon Web Services (AWS) Direct Connect [3] and Microsoft Azure ExpressRoute [4]. As more applications operate from cloud instances, such services become increasingly important to minimise latency and the cost of data transportation long term.
3.9 Fronthaul, Backhaul, and Midhaul
Much like the term edge itself, fronthaul, backhaul, and midhaul are all contextual terms. Whether a particular segment of network connectivity fits into any of these three categories depends primarily on the context in which it is observed: specifically, on where the network connectivity begins and ends, combined with the point of view of the person using the terms. Whenever these terms are used, it is worth clarifying the speaker's context so that the topology being described can be fully understood, minimising the chance of any resulting confusion.
In the context of infrastructure edge computing, we will use the infrastructure edge data centre as the starting point for our network connectivity when using these terms. This means that when we refer to fronthaul connectivity in this context, it is network connectivity between an infrastructure edge data centre and a piece of network infrastructure such as a telecoms tower or cable headend.
Midhaul in this context refers to the network infrastructure that is used to connect infrastructure edge data centres to one another across an area such as a city. This network is often an example of a MAN, as it connects network endpoints together across an area which is typically the size of a city. Building upon our use of LAN, MAN, and WAN in an earlier section, each infrastructure edge data centre can be considered a LAN in itself, and so the midhaul network infrastructure often does fit our description of a MAN as a network connecting many LANs distributed across a specific area.
Backhaul, when used in the context of infrastructure edge computing, refers to the range of network infrastructure which is used to connect an infrastructure edge data centre back to a piece of regional network or data centre infrastructure. An example of this is a WAN link, which is used to connect one or more infrastructure edge data centres to an IX in a neighbouring city. As such, this connectivity is typically a WAN although, depending on the distance required, it may alternatively be called a MAN.
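The three definitions above all use the infrastructure edge data centre as the reference point, so the classification can be expressed as a simple lookup on the two endpoints of a segment. The following sketch encodes only the definitions given in this section; the enum names are illustrative assumptions.

```python
from enum import Enum

class Site(Enum):
    ACCESS_INFRA = "access"      # e.g. telecoms tower or cable headend
    EDGE_DC = "edge_dc"          # infrastructure edge data centre
    REGIONAL_DC = "regional_dc"  # regional data centre or IX

def classify_haul(a: Site, b: Site) -> str:
    """Classify a network segment, using the infrastructure edge data
    centre as the starting point as this section's definitions do."""
    ends = {a, b}
    if ends == {Site.EDGE_DC, Site.ACCESS_INFRA}:
        return "fronthaul"
    if ends == {Site.EDGE_DC}:  # both endpoints are edge data centres
        return "midhaul"
    if ends == {Site.EDGE_DC, Site.REGIONAL_DC}:
        return "backhaul"
    raise ValueError("segment does not involve an edge data centre")

print(classify_haul(Site.EDGE_DC, Site.ACCESS_INFRA))   # fronthaul
print(classify_haul(Site.EDGE_DC, Site.EDGE_DC))        # midhaul
print(classify_haul(Site.EDGE_DC, Site.REGIONAL_DC))    # backhaul
```

Note that the same physical link would be classified differently by an observer using a different reference point, which is exactly why the text stresses clarifying the speaker's context.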
The way in which we will use these terms throughout this book can be seen in the diagram in Figure 3.4. Terms denoting geographical network scale such as LAN, MAN, and WAN are overlaid as appropriate:
Figure 3.4 Fronthaul, backhaul, and midhaul networks.
3.10 Last Mile or Access Networks
When defining infrastructure edge computing in a previous chapter, the term last mile network was used to denote the dividing line between the device edge and the infrastructure edge, allowing us to separate these two very different domains from one another. Last mile networks are key because, for the majority of users and endpoints, they are the means by which those endpoints connect to any of the network-accessible resources they seek to use. This is why these networks are also referred to as the access layer: when we consider the internet as a network of networks, there must be a first layer of network infrastructure to which the endpoint connects, and the connectivity beyond that first layer then shapes much of the performance and cost achievable in the network overall.
Recalling our discussion of network interconnection in a previous section, we can see that last mile network interconnection is especially critical to fulfilling the promises of infrastructure edge computing. Consider an endpoint connected to one network whose desired resource sits on another network, even in a nearby infrastructure edge data centre. If interconnection between these two networks takes place a significant distance away at an IX, the lower latency and cost of data transportation offered by the infrastructure edge data centre provide no benefit, as all traffic must travel through the distant IX before it reaches the edge data centre.
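The latency penalty of hauling traffic out to a distant IX and back can be sketched with a back-of-the-envelope propagation calculation. The distances below are illustrative assumptions, and the figure of roughly 200 km per millisecond for light in optical fibre is an approximation that ignores queueing, serialisation, and switching delays.

```python
FIBRE_KM_PER_MS = 200.0  # approximate signal propagation speed in optical fibre

def rtt_ms(path_km: float) -> float:
    """Propagation-only round-trip time over a one-way fibre path of
    the given length; all other sources of delay are ignored."""
    return 2 * path_km / FIBRE_KM_PER_MS

# Interconnection at a nearby infrastructure edge data centre:
direct = rtt_ms(10)  # assume the edge DC is 10 km from the endpoint

# The same endpoint reaching the same edge DC, but with network
# interconnection occurring at an IX in a neighbouring city
# (illustrative one-way path: 400 km out plus 390 km back to the edge DC):
trombone = rtt_ms(400 + 390)

print(f"direct: {direct:.2f} ms, via distant IX: {trombone:.2f} ms")
```

Even with these idealised numbers, the detour dominates the round-trip time, which is why interconnecting access networks at the infrastructure edge itself matters so much.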
The diagram in Figure 3.5 illustrates this challenge. Where access networks are not interconnected at the infrastructure edge, the benefits of the infrastructure edge data centre cannot be achieved:
A last mile network does not necessarily have to be a publicly accessible LTE network, cable network, or a similar entity; it may be a dedicated fibre connection for a single large entity such as a hospital to provide that entity with direct connectivity to resources which are of significant interest on the infrastructure edge. This network connectivity could be referred to, playing on the terminology used for the cloud on‐ramp services briefly described in a previous section, as an edge on‐ramp of sorts.
Figure 3.5 Last mile or access network interconnection failure.
Infrastructure edge computing does not prescribe a specific type or scale of access network, and the ideal infrastructure edge computing deployment will be able to support multiple concurrent access networks of scales varying from a single intended user through to hundreds of thousands. How each type of access network can interconnect at the infrastructure edge data centre will be explored in a later chapter, but for the purposes of this section, just remember that the more networks the better.
3.11 Network Transport and Transit
Ideally, an infrastructure edge computing network, which we will define here as the combination of infrastructure edge computing data centres combined with their supporting network infrastructure within a specific area such as a city, will serve as much of the traffic entering it from interconnected access