Understanding Infrastructure Edge Computing. Alex Marcham
An endpoint is any entity capable of sending and receiving data across the network. It is a generic term that encompasses any scale, capability, or role that an endpoint may have, from the tiniest embedded sensor for the internet of things (IoT) to a room-sized supercomputer, as long as that entity can perform those functions, regardless of the speed at which it does so or its other uses.
3.2.1 1970s
Although telephone and telegraph networks, as well as other early attempts at computer networking, could be considered precursors to the modern internet, for the purposes of this section we will use the Advanced Research Projects Agency Network (ARPANET) as designed at its inception in 1969 as our starting point. This focuses the discussion in this book on the parts of the internet deployed in the United States; however, the same pattern has been observed across the internet infrastructure of many other countries worldwide.
The ARPANET was an advanced design and implementation of cutting-edge network technologies that laid the foundation for the internet of today and tomorrow, despite how quaint it may look to some in the present day. It remains an excellent example of how solid design principles can ensure that, even as the specific technologies used to achieve an aim change over time as progress is made in their individual areas, any changes remain in service of the original intention.
3.2.2 1990s
Throughout the 1990s, internet usage rapidly accelerated across a variety of dimensions. Not only did the number of internet users grow quickly, but so did the amount of data that each of them sent and received, due to a slew of new use cases and services which became accessible online. This pushed internet infrastructure to evolve across each of those same dimensions: coverage expanded, speeds increased across each part of the internet's constituent networks, and regionalisation accelerated.
As described in the previous chapter, the key theme that we can see developing between each of these stages in the architectural evolution of the internet is increasing regionalisation. Both network and server or data centre infrastructure had at this stage begun to push out closer to the locations of their end users, both by expanding existing areas of internet service availability and by adding new areas over time to capture the growing demand for key new online services.
3.2.3 2010s
The 2010s saw the widespread adoption of two significant use cases for internet infrastructure: cloud computing and streaming video services. Both have proved instrumental in how we have designed large-scale networks in the years since. Streaming video drives heavy, highly asymmetric use of downlink network bandwidth during the evenings as people turn to internet-provided alternatives to cable TV services for entertainment, while cloud computing drives large uploads of data during the day, for both transactional purposes and long-term storage, as more business applications shift from on-premises to cloud services.
Originally deployed during the 1990s, content delivery networks (CDNs) fully came to the fore during this period as a means to achieve several important aims. Moving stores of content closer to their intended users brought a number of key benefits to users, network operators, and content providers: a better user experience, reduced strain on growing backhaul and midhaul network infrastructure, and a way to address concerns from network operators that content providers who send far more traffic than they receive were upsetting the established balance of interconnection.
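To make that caching behaviour concrete, the minimal Python sketch below simulates a single edge cache. The names used (EdgeCache, fetch_from_origin) are hypothetical, purely for illustration; a real CDN adds eviction policies, freshness checks, and request routing on top of this basic idea.

```python
# Minimal sketch of the caching behaviour behind a CDN edge node.
# All class and method names here are illustrative, not a real CDN API.

class EdgeCache:
    """Caches content close to users so repeat requests skip the backhaul."""

    def __init__(self):
        self._store = {}          # content_id -> content
        self.origin_fetches = 0   # requests that had to cross the backhaul

    def fetch_from_origin(self, content_id):
        # Stand-in for a request back to the distant origin data centre.
        self.origin_fetches += 1
        return f"<content for {content_id}>"

    def get(self, content_id):
        # Cache hit: served locally, no backhaul traffic generated.
        if content_id in self._store:
            return self._store[content_id]
        # Cache miss: fetch once from the origin, then serve locally.
        content = self.fetch_from_origin(content_id)
        self._store[content_id] = content
        return content

cache = EdgeCache()
for _ in range(1000):              # 1000 users request the same popular video
    cache.get("popular-video")
print(cache.origin_fetches)        # 1: the backhaul carried it only once
```

The interconnection point follows directly: with the cache in place, the content provider's traffic crosses the operator's backhaul once rather than a thousand times.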
3.2.4 2020s
During the 2020s, the trend of increasing network regionalisation will continue, enabled by the use of infrastructure edge computing. This operational and deployment methodology moves small data centres and their associated network infrastructure out to increasingly local locations, often 15 miles or less from their end users, augmenting all of the other regionalisation methods employed from the 1990s through 2020. This methodology and set of technologies results in a densification of network and data centre resources at the access layer of the network, the layer closest to end users.
Infrastructure edge computing enables the architecture of the internet to progress from its origins in the ARPANET, a handful of comparatively centralised locations, to a highly distributed architecture that pushes network and data centre infrastructure out into urban and rural areas. What began in 1969 as a network with four initial hosts grows into a regionalised and densified internet that brings the capabilities of the data centre, in terms of application operation, data storage, and network interconnection, to potentially thousands of micro data centre locations across even a single country.
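A rough back-of-the-envelope sketch shows why the 15-mile figure matters. Assuming light in optical fibre travels at roughly two thirds of c (a standard approximation), best-case propagation delay grows linearly with distance to the data centre, before any routing, queuing, or serialisation delay is added on top:

```python
# Best-case one-way propagation delay over fibre at various distances.
# Assumption: signal speed in fibre is roughly two thirds of c.

SPEED_OF_LIGHT_KM_S = 299_792                     # c in a vacuum, km/s
FIBRE_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3    # approximate speed in fibre
KM_PER_MILE = 1.609

def one_way_delay_ms(distance_miles):
    """Best-case one-way propagation delay over fibre, in milliseconds."""
    distance_km = distance_miles * KM_PER_MILE
    return distance_km / FIBRE_SPEED_KM_S * 1000

for miles in (15, 150, 1500):
    print(f"{miles:>5} miles -> {one_way_delay_ms(miles):.3f} ms one way")

# 15 miles -> ~0.121 ms; 1500 miles -> ~12.1 ms. An edge site 15 miles away
# removes most of the propagation delay of a distant regional data centre.
```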
3.2.5 Change over Time
It may seem easy to look back at the design decisions made in previous architectural generations of the internet and scoff: if the benefits of network regionalisation were clear and the first steps along this path had already been taken, why not build it out in this way from the beginning? Like all choices made during system design, there are many trade-offs which govern whether it is feasible, both technically and economically, to deploy a specific level or type of infrastructure at a given time. The choices made during a particular decade, as highlighted previously, must be appreciated within the time and context in which they were made, rather than judged by what we know now.
Although these changes over time to the architecture of the internet in response to the needs of both its users and its operators are remarkable, it is important to note the level of difficulty that is inherent in making any change to a complex network system. The next section describes one of the methods used by the global network engineering community to minimise the impact of any changes on other parts of the system so that changes can often be made as and when they are ready, with no need to concurrently change other links or endpoints in the network to ensure correct operation.
3.3 The OSI Model
Any detailed discussion of network technology would be incomplete without a shared understanding of the Open Systems Interconnection (OSI) model [1]. This often-used conceptual model allows us to create highly interoperable, open, and scalable communication systems by categorising the technological functions required by the network into a stack of layers numbered from 1 to 7, each of which has a set of interfaces to communicate with its directly adjacent layers (see Table 3.1).
This model is very powerful: by isolating a set of technological functions into a specific layer that has interfaces to talk only to its directly adjacent layers, changes to any one layer need not impact the operation of any other layer in the system. This allows asynchronous evolution of the entire stack of network technologies, in which one or more layers experience more rapid change than their neighbouring layers. An example can be seen with each new generation of Wi-Fi: significant advances in speed are achieved by changing only layers 1 and 2, without any of the upper layers being aware of the change. Consider what the situation would be like if the entire network technology stack had to be remade to accommodate a change at any layer. The resulting stack would be highly inflexible, as even an isolated change would require significant work. Over time, this would become a key barrier to keeping up with the edge of technological progress and would prevent open contribution to the stack by other companies or individuals, limiting innovation.
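The sketch below illustrates this isolation in Python, using two hypothetical link-layer classes standing in for successive Wi-Fi generations. The upper-layer code depends only on the interface between the layers, so swapping in a faster layer-1/2 implementation requires no change above it:

```python
# A minimal sketch of layer isolation, with hypothetical class names.
# The upper-layer function depends only on a send() interface, so a new
# link-layer generation can be swapped in without touching it.

from abc import ABC, abstractmethod

class LinkLayer(ABC):
    """The interface the layer above sees; nothing below it leaks through."""

    @abstractmethod
    def send(self, frame: bytes) -> None:
        ...

class WiFi5Link(LinkLayer):
    def send(self, frame: bytes) -> None:
        print(f"802.11ac: sent {len(frame)} bytes")

class WiFi6Link(LinkLayer):
    # A new layer-1/2 generation: faster radio, same interface upward.
    def send(self, frame: bytes) -> None:
        print(f"802.11ax: sent {len(frame)} bytes, more efficiently")

def deliver(payload: bytes, link: LinkLayer) -> None:
    # Upper-layer code: identical regardless of which link layer sits below.
    link.send(payload)

deliver(b"hello", WiFi5Link())
deliver(b"hello", WiFi6Link())   # upgraded PHY/MAC, zero upper-layer changes
```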
Now that the reasoning behind the OSI model has been established, we will briefly describe the functionality of each layer as it is relevant to infrastructure edge computing; the number of each layer will be used throughout this book to quickly refer to the concepts it represents. In this example, we take the perspective of a network endpoint receiving data that has been transmitted across the network, so our progression will be from layer 1 through layer 7. When sending data, this progression through the layers is reversed: data flows from layer 7 down to layer 1 to be transmitted across the physical network.
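As a toy illustration (not a real protocol stack), the Python sketch below wraps a payload in one header per layer on the way down and strips them in reverse order on the way up, mirroring the send and receive paths just described:

```python
# Toy illustration of OSI-style encapsulation. Each layer wraps the data
# with its own header on the way down (layer 7 to layer 1) at the sender,
# and strips it on the way up (layer 1 to layer 7) at the receiver.

LAYERS = [
    "L7:application", "L6:presentation", "L5:session",
    "L4:transport", "L3:network", "L2:data-link", "L1:physical",
]

def transmit(data: str) -> str:
    # Sending endpoint: layer 7 down to layer 1, each layer adds its header.
    for layer in LAYERS:
        data = f"[{layer}]{data}"
    return data  # what actually crosses the physical medium

def receive(signal: str) -> str:
    # Receiving endpoint: layer 1 up to layer 7, each layer strips its header.
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert signal.startswith(prefix)
        signal = signal[len(prefix):]
    return signal

wire = transmit("GET /video")
print(wire)             # the payload wrapped in seven nested layer headers
print(receive(wire))    # "GET /video" recovered at the far endpoint
```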