be required which is not possible or practical to support using the local resources of the device, then the data centre must be physically located close enough to the device for the network connectivity between them to provide acceptable latency, so that the user experience is not degraded and the application can function as intended. This is often challenging, as a user may be many hundreds or thousands of miles away from the data centre supporting their application, which exacerbates these issues.
Figure 2.4 Application with access to remote data centre resources.
Finally, let’s examine what this same use case looks like with the introduction of infrastructure edge computing. A single IEDC has been added to our previous topology, located between the user’s device and the RNDC. In addition, the IEDC is interconnected with the last mile network to which the device is connected, and it is connected back to the RNDC. These two elements are crucial to ensuring optimal network connectivity, and they will be explored further in the next chapter.
In this case, the application has access to three sets of resources, each offering an increasing share of the total resources potentially available: the device itself, the IEDC, and the RNDC. As can be seen in Figure 2.5, these resources are physically arranged in a gradient from the device in the user’s hand to a national data centre which may be thousands of miles away. The IEDC is optimally located no more than 15 miles from the user to minimise latency while still supporting the dense resources required by the application; in this way, the IEDC can support the needs of the application in the same way as an RNDC but from a physical location that is much closer to the end user. This blend of characteristics shows the power of an optimal infrastructure edge computing deployment, where an edge data centre can provide latency comparable to the device itself alongside the back‐end muscle of a larger scale data centre.
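To put the 15 mile figure in perspective, the following back‐of‐envelope sketch estimates one‐way fibre propagation delay at different distances. It assumes the common rule of thumb that light in optical fibre travels at roughly two thirds the speed of light in a vacuum (about 5 microseconds per kilometre); the distances are illustrative rather than taken from any specific deployment, and real latency also includes switching, queuing, and processing delays.

```python
# Rough one-way propagation delay estimates, assuming light in optical
# fibre travels at roughly 2/3 the speed of light in a vacuum
# (about 200,000 km/s, i.e. ~5 microseconds per kilometre).
FIBRE_US_PER_KM = 5.0
MILES_TO_KM = 1.609

def propagation_delay_ms(distance_miles: float) -> float:
    """One-way fibre propagation delay in milliseconds for a given distance."""
    return distance_miles * MILES_TO_KM * FIBRE_US_PER_KM / 1000.0

for label, miles in [("IEDC at 15 miles", 15), ("RNDC at 1,000 miles", 1000)]:
    print(f"{label}: ~{propagation_delay_ms(miles):.3f} ms one way")
# IEDC at 15 miles: ~0.121 ms one way
# RNDC at 1,000 miles: ~8.045 ms one way
```

Even before any equipment or congestion delay is added, distance alone separates the two locations by nearly two orders of magnitude of propagation latency, which is why physical proximity is the defining characteristic of the IEDC.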
Although the IEDC is physically a fraction of the size of the RNDC, it is capable of providing similar capabilities for the users within its area of operations. This balance is achieved by deploying many IEDCs in a given area, such as across a city, and determining the user population that surrounds each of those facilities, for example by drawing a 15‐mile radius around each facility to maintain low latency. Should additional resources be required over time, additional IEDCs can be deployed and the user population segmented again to prevent individual data centres from becoming heavily congested. This deployment and operation methodology allows infrastructure edge computing to scale over time beyond an initial deployment.
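As a hypothetical illustration of this segmentation, the sketch below attaches each user to their nearest IEDC and flags any facility whose load exceeds a capacity budget, signalling that a further IEDC should be deployed and the population re‐segmented. All coordinates, capacities, and the simple straight‐line distance model are assumptions made purely for the example.

```python
# Toy segmentation of a user population across IEDCs: each user attaches
# to the nearest facility; facilities over a capacity budget are flagged
# as candidates for relief by deploying another IEDC nearby.
from math import dist

iedcs = {"IEDC-A": (0.0, 0.0), "IEDC-B": (10.0, 5.0)}  # positions in miles
capacity_per_iedc = 3  # maximum users per facility in this toy example
users = [(1.0, 1.0), (2.0, 0.5), (9.0, 4.0), (0.5, 2.0), (1.5, 1.5)]

load = {name: 0 for name in iedcs}
for user in users:
    nearest = min(iedcs, key=lambda name: dist(user, iedcs[name]))
    load[nearest] += 1

for name, count in load.items():
    status = "OK" if count <= capacity_per_iedc else "congested: deploy another IEDC"
    print(f"{name}: {count} users ({status})")
```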
Figure 2.5 Application with access to infrastructure edge computing resources.
In many cases, the ideal set of resources does not exist in only one of these three locations. To make the best use of this gradient of resources from device to national data centre, an application and its operator should seek to optimise which functions are performed using which set of resources and take into account the individual characteristics of each of these sets. This is a complex issue which will be explored further in this book; do not worry too much about the minutiae of this right now.
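As a rough sketch of this kind of optimisation, the code below picks the closest tier in the gradient that satisfies both a function’s latency requirement and its compute need. The latency thresholds and relative capacity figures are invented for illustration and are not taken from this book or from any real deployment.

```python
# Hypothetical placement of application functions across the
# device / IEDC / RNDC gradient, preferring the closest tier that
# meets both the latency requirement and the compute requirement.
TIERS = [
    # (tier name, typical round-trip latency in ms, relative compute capacity)
    ("device", 0.0, 1),
    ("infrastructure edge (IEDC)", 1.0, 100),
    ("regional or national DC (RNDC)", 30.0, 10_000),
]

def place_function(max_latency_ms: float, compute_units: int) -> str:
    """Return the closest tier satisfying both latency and compute needs."""
    for name, latency_ms, capacity in TIERS:
        if latency_ms <= max_latency_ms and compute_units <= capacity:
            return name
    return "no suitable tier"

print(place_function(max_latency_ms=5.0, compute_units=50))     # IEDC
print(place_function(max_latency_ms=100.0, compute_units=5000)) # RNDC
```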
As can be seen from this use case, just as the RNDC expanded the capabilities of applications which previously could rely only on the resources available on a user’s device, the IEDC adds an additional layer of resources which augments the capabilities of both the device and the RNDC. This gradient of resources, spanning from the device in a user’s hands through an IEDC all the way to a national data centre which may be thousands of miles away, is the foundation of the next‐generation internet, enabling valuable new classes of applications and use cases to become practical.
2.7 Summary
This chapter formed the basis of an introduction to edge computing, describing the key terminology and many of the core concepts which are driving the design, deployment, and operation of infrastructure edge computing in particular, with coverage of device edge computing as well. The terminology and concepts described in this chapter will be used frequently throughout the rest of this book, so it may be useful to refer back to its key points at a later date to refresh your memory.
In the next chapter, we will explore the foundations of network technology to give full context to the impact of infrastructure edge computing on these concepts and to then establish a clear baseline on which to build our understanding of how tomorrow’s networks will differ from those we see today.
3 Introduction to Network Technology
3.1 Overview
To gain a fully contextualised understanding of the impact of infrastructure edge computing on our internet infrastructure, we must have a common understanding of the design and operation of the networks in use today, from the grand scope of their overall architectural principles to the underlying protocols which make them work. This chapter explores modern network design and operation, with the aim of equipping the reader with an understanding of the most relevant parts of the topic, which will be drawn upon throughout the rest of this book as further concepts are introduced.
3.2 Structure of the Internet
Although the internet may appear to be one single amorphous entity, this is not the case at all. The internet is a network of networks: a complex system of protocols, physical infrastructure, and many layers of agreements between network operators to work together for the mutual benefit of each party involved. A thorough analysis of every aspect of the structure of the internet is outside the scope of this book, but a progression through the parts of internet infrastructure which are most relevant to infrastructure edge computing and its main driving factors is warranted.
Although many of the major stages in the evolution of the internet were described briefly in the previous chapter, the following sections will describe in greater detail the implications of these changes for the design of the networks which, joined together, make up the internet as we know it.
During this chapter, the term network endpoint, or endpoint, will be introduced. It refers to any entity on the network that is capable of sending and receiving data which