…users and applications. More regionalisation of internet infrastructure was required to address these challenges, and perhaps the most influential method of achieving this was positioning static content in caches which are placed strategically throughout the network, creating a shorter path between traffic source and destination.
2.4.3 CDNs and Early Examples
One of the best examples of network regionalisation solving a specific use case while also addressing the needs of network operators is the content delivery network (CDN) work done by Akamai Technologies in the late 1990s [5]. Although the internet and the world wide web it supports were still in their infancy compared to today, with both having gained mainstream acceptance only a few years previously, the need for regionalisation of key infrastructure was already beginning to show as the internet became known for distributing new multimedia content, such as images and early examples of hosted video, which began to strain its underlying networks. If left unaddressed, this strain would have limited the uptake of online services by both businesses and home users and ultimately prevented the adoption of the internet as the go‐to location for business, essential services, shopping, and entertainment.
The importance of CDNs, and of the practical proof they provide of the benefits of network regionalisation, cannot be overstated. By deploying a large number of distributed content caching nodes throughout the internet, CDNs have drastically reduced the centralised load placed on internet infrastructure at a regional, national, and global scale. Today they are a fact of life for network operators; these static caches are deployed in many thousands of instances by a variety of providers such as CacheFly, Cloudflare, and Akamai, who reach agreements with network operators for their deployment and operation within both the wired and wireless networks which provide last mile connectivity. This regionalisation of static content, by moving CDN nodes to locations closer to their end users, improves the user experience and saves network operators significant sums in the backhaul network capacity which would otherwise be needed to serve demand for content located farther away in an RNDC.
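To make the mechanism concrete, the following is a minimal sketch in Python of the caching pattern a CDN node applies; the EdgeCache class and origin_fetch callback are hypothetical illustrations, not any vendor's actual software.

```python
# A minimal sketch of the static-content caching pattern described above.
# EdgeCache and origin_fetch are hypothetical placeholders for illustration.

class EdgeCache:
    """Serve content from a local cache, falling back to the origin on a miss."""

    def __init__(self, origin_fetch):
        self._store = {}                   # url -> content held at this edge node
        self._origin_fetch = origin_fetch  # callable that reaches the origin

    def get(self, url):
        if url in self._store:             # cache hit: served locally, no backhaul used
            return self._store[url]
        content = self._origin_fetch(url)  # cache miss: one trip across the backhaul
        self._store[url] = content         # subsequent requests stay regional
        return content

# Usage: the first request traverses the backhaul; repeats are served at the edge.
cache = EdgeCache(origin_fetch=lambda url: f"<content of {url}>")
cache.get("https://example.com/logo.png")  # miss: fetched from the origin
cache.get("https://example.com/logo.png")  # hit: served from the local store
```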
Where infrastructure edge computing diverges from the historical CDN deployment model is in its ability to support a range of use cases which rely on dense compute resources to operate, such as clusters of central processing units (CPUs), graphics processing units (GPUs), or other resources which enable infrastructure edge computing to provide services beyond the distribution of static content. Most CDN deployments do not require significant compute density, and many of the existing telecommunications sites where they are deployed (such as shelters at the bases of cellular towers, cable headend locations, or central office locations) were originally designed to support low‐density network switching equipment and cannot meet the difficult cooling and power delivery requirements which dense compute resources impose. Additionally, in many cases infrastructure edge computing deployments bring additional network infrastructure to provide optimal paths for data transit between last mile networks and edge data centre locations and between edge data centres and RNDCs; typical CDN nodes, in contrast, are usually deployed atop existing network operator infrastructure at aggregation points such as cable network headends.
It is worth mentioning here, however, that infrastructure edge computing and the CDN are not at all mutually exclusive concepts. Just as CDNs can operate today from various locations across the network through the deployment of server infrastructure in locations such as cable network headends, they are also able to operate from an IEDC. One or more CDNs can then use infrastructure edge computing facilities as deployment locations for CDN nodes, replacing or augmenting their existing deployments which use the current infrastructure of the network operator.
Although CDNs in many ways pioneered the deployment methodology of placing numerous content caches throughout the internet to shorten the path between the source and destination of traffic, it is important to understand the distinction between a deployment methodology and a use case. The CDN is a use case which needed a deployment methodology that achieved network regionalisation in order to function. As infrastructure edge computing is deployed, CDNs can be operated from these locations as well. This is an important point that will be revisited later in the context of the cloud.
2.5 Why Edge Computing?
Now that we have established the terminology and some of the history behind the concept of edge computing, we can delve deeper into the specific factors which make this technology appealing for a wide range of use cases and users. We will return to many of these factors throughout this book, but this section will establish these factors and the basic reasoning behind their importance at the edge.
2.5.1 Latency
The time required for a single bit, packet, or frame of data to be successfully transmitted between its source and destination can be measured in extreme detail by a variety of mechanisms. Between the ports on a single Ethernet switch, nanosecond scale latencies can be achieved, though they are more frequently measured in microseconds. Between devices, microsecond or millisecond scale latencies are observed, and across a large‐scale WAN path, such as one which begins in a last mile access network, hundreds of milliseconds of latency are commonly experienced, especially when the traffic destination is in a remote location relative to the source of the data, as is the case when a user located at the device edge uses an application hosted in a remote centralised data centre facility.
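As a rough illustration of measuring latency at the millisecond scale, the sketch below times a TCP connection handshake in Python; the host, port, and timeout are illustrative assumptions rather than a recommended measurement methodology.

```python
# A rough sketch of measuring round-trip latency at millisecond scale by
# timing a TCP connection handshake. Host and port are illustrative only.
import socket
import time

def tcp_connect_latency_ms(host: str, port: int = 443) -> float:
    """Return the wall-clock time, in milliseconds, to open a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass                                   # connection established; nothing is sent
    return (time.perf_counter() - start) * 1000

# A nearby edge data centre should report a far lower figure than a
# remote centralised facility reached over a long WAN path.
print(f"{tcp_connect_latency_ms('example.com'):.1f} ms")
```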
Latency is typically considered to be the primary performance benefit which edge computing, and particularly infrastructure edge computing, can provide to its end users, although other performance advantages exist, such as the ability to avoid hotspots of network congestion by shortening the network path between a user and the data centre running their application of choice.
Beyond a certain point of acceptability, where the network provides the data rate the application requires to function as intended, increasing the bandwidth and therefore the maximum data rate available to a user or application for a real‐time use case does not measurably increase the quality of experience (QoE). The primary drivers of increased user QoE are then latency, measured at its maximum, minimum, and average over a period of time, and the ability of the system to provide performance as close to deterministic as possible by avoiding congestion.
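The sketch below shows one way the latency figures described above, maximum, minimum, average, and a simple jitter measure, might be summarised from a set of samples; the sample values are invented for illustration.

```python
# A small sketch of summarising latency samples: maximum, minimum, and
# average over a period, plus jitter as a proxy for how far the path is
# from deterministic performance. The sample values are invented.
from statistics import mean, pstdev

samples_ms = [12.1, 11.8, 12.4, 38.9, 12.0, 12.2]   # one congested outlier

summary = {
    "min_ms": min(samples_ms),
    "max_ms": max(samples_ms),                  # a single congestion spike dominates this
    "avg_ms": round(mean(samples_ms), 1),
    "jitter_ms": round(pstdev(samples_ms), 1),  # spread; 0 would be fully deterministic
}
print(summary)
```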
The physical distance between a user and the data centre providing their application or service is not the only factor which influences latency from the network perspective. The network topology that exists between the end user and the data centre is also of significant concern; to achieve the lowest latency, as direct a connection as possible is preferable to a circuitous route which introduces additional delay in data transport. In extreme cases, data may be sent away from its intended destination before taking a hairpin turn back on a return path to get there. This is referred to as a traffic trombone, the path which the data takes resembling the shape of the instrument.
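A back‐of‐the‐envelope calculation makes the trombone effect tangible. The sketch below assumes light in optical fibre propagates at roughly 200 km per millisecond (about two‐thirds of the speed of light in a vacuum); the route distances are invented for illustration.

```python
# A back-of-the-envelope sketch of why path shape matters as much as distance.
# Light in optical fibre covers roughly 200 km per millisecond; the route
# distances below are invented for illustration.

FIBRE_KM_PER_MS = 200.0

def one_way_delay_ms(route_km: float) -> float:
    """Propagation delay only; queuing and processing delays come on top."""
    return route_km / FIBRE_KM_PER_MS

direct_km = 50       # user to a nearby edge data centre
trombone_km = 2000   # same endpoints, but hairpinned via a hub ~1000 km away

print(f"direct:   {one_way_delay_ms(direct_km):.2f} ms one way")    # 0.25 ms
print(f"trombone: {one_way_delay_ms(trombone_km):.2f} ms one way")  # 10.00 ms
```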
2.5.2 Data Gravity
Data gravity refers to the challenge of moving large amounts of data. Moving data from where it was collected or generated to a location where it can be processed or stored requires energy, which can be expressed in terms of network and processing resources as well as financial cost; this cost can be prohibitive when dealing with a large amount of data that has real‐time processing needs.
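A simple calculation illustrates the scale of this energy. The sketch below estimates how long moving a dataset takes at a given sustained link rate; the dataset size and link speeds are illustrative.

```python
# A quick sketch of the cost behind data gravity: how long moving a dataset
# takes at a given sustained link rate. All figures are illustrative.

def transfer_hours(dataset_tb: float, link_gbps: float) -> float:
    """Hours to move dataset_tb terabytes over a sustained link_gbps link."""
    bits = dataset_tb * 1e12 * 8            # terabytes -> bits
    return bits / (link_gbps * 1e9) / 3600  # seconds -> hours

# Hauling a day of raw sensor data to a distant regional data centre:
print(f"{transfer_hours(10, 1):.1f} h at 1 Gbps")    # ~22.2 h for 10 TB
print(f"{transfer_hours(10, 10):.1f} h at 10 Gbps")  # ~2.2 h
```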
Additionally, many individual pieces of data that are collected or generated can, once processed, be considered noise, as they do not significantly contribute to the insight which can be generated by analysis of the data. Before processing occurs, however, it is difficult to know which pieces of data can be discarded as insignificant, and an individual device may not have all of the contextual information or the analytic processing power available to make this judgement accurately. This makes the use of infrastructure edge computing key, as this processing can occur comparatively close to the source of the data before the resulting insight is sent back to a regional data centre for long‐term storage.
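As a minimal sketch of this pattern, the following fragment filters noise locally and forwards only a compact summary upstream; the threshold, reading format, and function name are invented for illustration.

```python
# A minimal sketch of the edge processing pattern described above: discard
# readings that are noise and forward only the resulting insight upstream.
# The threshold and reading format are invented for illustration.
from statistics import mean

def summarise_at_edge(readings, threshold=50.0):
    """Filter noise locally; return only the compact insight worth backhauling."""
    significant = [r for r in readings if r >= threshold]  # drop the noise
    if not significant:
        return None          # nothing worth sending to the regional data centre
    return {"count": len(significant), "avg": round(mean(significant), 1)}

raw = [3.0, 4.1, 61.2, 2.9, 70.4, 3.3]   # mostly noise from one sensor
insight = summarise_at_edge(raw)
print(insight)                           # a few bytes upstream, not the raw stream
```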
2.5.3 Data Velocity
Many