but ones which belong to different sub-networks. The two controllers may be compatible but they may also be incompatible and, in this case, a signaling gateway is needed.
Figure 2.7 shows a number of important open source programs that have been developed to handle a layer or an interface. Starting from the bottom, in the virtualization layer, network virtual machines were standardized by the ETSI in a working group called NFV (Network Functions Virtualization), which we will revisit in detail later on. Here, let us simply note that the aim of NFV is to standardize all network functions with a view to virtualizing them and facilitating their execution in places other than the original physical machine. To complement this standardization, open source software has been developed which allows full compatibility between virtual machines.
The control plane includes the controllers. One of the best known is OpenDaylight – an open source controller developed collaboratively by numerous companies. This controller, as we will see later on, contains a large number of modules, often developed to serve the interests of the particular company that contributed them. Today, OpenDaylight is one of the major pieces in Cisco's architecture, as well as in those of other manufacturers. Later on, we will detail most of the functions of OpenDaylight. Of course, there are many other controllers – open source ones as well – such as OpenContrail, ONOS, Floodlight, etc.
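To make this concrete, the short sketch below queries a controller's topology through OpenDaylight's RESTCONF northbound interface. It is only a minimal illustration: the address, the port 8181 and the admin/admin credentials are assumptions about a default lab installation, not fixed properties of the product.

# Query the operational topology of an OpenDaylight controller through its
# RESTCONF northbound interface. The URL, port and default admin/admin
# credentials are assumptions about a lab setup.
import requests

ODL_URL = ("http://127.0.0.1:8181/restconf/operational/"
           "network-topology:network-topology")

def list_nodes():
    # RESTCONF returns the topology as JSON; authentication is HTTP Basic.
    reply = requests.get(ODL_URL, auth=("admin", "admin"),
                         headers={"Accept": "application/json"})
    reply.raise_for_status()
    for topology in reply.json()["network-topology"]["topology"]:
        for node in topology.get("node", []):
            print(node["node-id"])

if __name__ == "__main__":
    list_nodes()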
The uppermost layer represents the Cloud management systems. It is roughly equivalent to the operating system on a computer. It includes OpenStack, the system most favored by developers, but many other products exist, both open source and proprietary.
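To give an idea of how such a Cloud management system exposes the infrastructure, the sketch below lists the compute instances of an OpenStack project using the openstacksdk library. The cloud name "mycloud" is an assumption: it refers to an entry the operator would define in a local clouds.yaml file.

# List the compute instances visible to a project through the OpenStack API.
# A minimal sketch with the openstacksdk library; "mycloud" names an entry
# in clouds.yaml and is an assumption, not a fixed value.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

for server in conn.compute.servers():
    print(server.name, server.status)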
The southbound interface is often known through its standard naming from the ONF: OpenFlow. OpenFlow is a signaling system between the infrastructure and the controller. This protocol was designed by Nicira and has led to a de facto standard from the ONF. OpenFlow transports the information that defines the stream in question, so as to open, modify or close the associated path. OpenFlow also determines the actions to be executed in one direction or the other over the interface. Finally, OpenFlow allows measurements taken on the different communication ports to be fed back, so that the controller has a very precise view of the network.
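The fragment below gives a feel for the messages that OpenFlow carries: it installs a table-miss rule which forwards any unmatched packet to the controller. It is a sketch written for the Ryu controller framework and OpenFlow 1.3 – one possible implementation among many, not part of the OpenFlow specification itself.

# Install a table-miss rule sending unmatched packets to the controller,
# illustrating the flow-programming messages exchanged over the southbound
# interface. A sketch for the Ryu framework, OpenFlow 1.3.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMiss(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match everything; send matching packets to the controller unbuffered.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))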
The northbound and southbound interfaces have been standardized by the ONF to facilitate compatibility between Cloud providers, the control software and the physical and virtual infrastructure. Most manufacturers conform to these standards to a greater or lesser degree, depending on their interest in protecting the range of hardware already in operation. Indeed, one of the objectives is to allow companies with an extensive installed base to upgrade to the next generation of SDN without having to purchase an entirely new infrastructure. A transitional period is needed, during which we may see one of two scenarios:
– the company adapts the SDN environment to its existing infrastructure. This is possible because the software layer is normally independent of the physical layer. The company's machines must be compatible with the manufacturer's hypervisor or container products. However, it is important to add to or update the infrastructure so that its power increases by at least 10%, and preferably 20 or 30%, to be able to handle numerous logical networks;
– the company implements the SDN architecture on a new part of its network, and extends it little by little. This solution means that both the old generation of the network and the new one need to be capable of handling the demand.
Figure 2.7. Example of open source developments
Now let us go into detail about the different layers. Once again, we will start with the bottom layer. The physical layer is fundamental: it is designed around the most generic possible hardware devices in order to obtain the best cost/benefit ratio, while remaining suited for networking, i.e. equipped with the necessary communication cards and with the capacity to host the intended software networks. It is clear that performance is of crucial importance, and that the physical infrastructure has to be able to cope with what is at stake. One of the priorities, though, is to minimize the active physical resources so as to avoid consuming too much energy. With this goal in mind, the best approach is an algorithm that places the virtual machines appropriately, with a view to putting the highest possible number of physical machines on standby outside of peak hours.

Urbanization is becoming a buzzword in these next-generation networks. Unfortunately, urbanization algorithms are still in the very early stages of development, and are not capable of dealing with multi-criteria objectives: only one criterion is taken into account – e.g. performance. An algorithm can be executed based on the criterion of load balancing; it can instead take energy consumption into account by doing the opposite of load balancing – i.e. channeling the data streams along common paths in order to be able to place a maximum number of physical machines on standby. The difficulty in the latter case is being able to turn the resources back on as the workload of the software networks increases again. Certain machines, such as virtual Wi-Fi access points, are difficult to wake from standby mode when external devices wish to connect. First, we need electromagnetic sensors capable of detecting these mobile terminals, and second, we need to send an Ethernet frame over a physical wire to the Wi-Fi access point using the Wake-on-LAN function.
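To illustrate the energy-oriented variant of urbanization, the sketch below consolidates virtual machines onto as few servers as possible with a first-fit-decreasing heuristic, so that the remaining servers can be put on standby. It is a deliberately single-criterion toy: the loads, the capacity of 1.0 and the heuristic itself are illustrative choices, and a real placement algorithm would have to balance several objectives at once.

# Consolidate virtual machines onto as few physical servers as possible
# (first-fit decreasing), so that unused servers can be put on standby.
# A single-criterion toy; real urbanization algorithms would also weigh
# load balancing, latency and resilience.

def consolidate(vm_loads, server_capacity):
    """Return a list of servers, each a list of the VM loads placed on it."""
    servers = []
    for load in sorted(vm_loads, reverse=True):  # place the biggest VMs first
        for srv in servers:
            if sum(srv) + load <= server_capacity:
                srv.append(load)
                break
        else:
            servers.append([load])  # no room anywhere: wake up a new server
    return servers

placement = consolidate([0.5, 0.2, 0.7, 0.1, 0.4, 0.3], server_capacity=1.0)
print(len(placement), "servers active:", placement)
# 3 servers active: [[0.7, 0.3], [0.5, 0.4, 0.1], [0.2]]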
2.3. NFV (Network Functions Virtualization)
The purpose of NFV (Network Functions Virtualization) is to decouple the network functions from the network equipment. This decoupling enables us to run the software controlling a device on a machine other than the device itself. This means we can place the operational programs of a machine in a datacenter within a Cloud. Standardization is being carried out by a working group of the ETSI, a European standardization body, although in this particular case the project is open to all operators and device manufacturers from across the world. Over 200 members are taking part in this standardization effort. The objective of this initiative is to define an architecture able to virtualize the functions included in networking devices, and to clearly identify the challenges that need to be overcome. The standardization tasks are divided between five separate working groups, described in detail below.
The first group, “Architecture of the Virtualization”, has the objective of producing a reference architecture for a virtualized infrastructure, along with reference points to interconnect the different components of that architecture. The second group, “Management and Orchestration”, is charged with defining the rollout, instantiation, configuration and management of network services that use the NFV infrastructure. The third group, “Software Architecture”, aims to define the reference software architecture for the execution of virtualized functions. The fourth group, the “Security Expert Group”, as the name suggests, works on the security of the software architecture. Finally, the last group, the “Performance and Portability Expert Group”, is to provide solutions to optimize performance and manage the portability of the virtual machines representing the functions of this new environment.
Figure 2.8 shows a number of appliances handled by NFV, such as firewalls or SBCs (Session Border Controllers). These functions are moved to a powerful physical machine or to a virtual machine in a datacenter.
Figure 2.8. NFV (Network Functions Virtualization)
Figure 2.9 goes into a little more detail about the machines whose functions are externalized. ETSI's aim is to standardize the virtual machines used to perform these functions. The power of the server determines the rate at which the functions are executed. This matters especially for functions such as firewalls or deep packet inspection (DPI), which require extremely high processing power to identify, in real time, the applications passing through the network.
Figure 2.9. NFV machines
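As a very rough illustration of a network function reduced to software, the sketch below implements a toy firewall that filters packets against a static deny list. The packet representation and the rule format are inventions for the example: ETSI NFV standardizes the architecture around such functions, not their internal code.

# A toy virtualized firewall function: an appliance reduced to software that
# could run on any server. Packet fields and rule format are illustrative.
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    dst_port: int

# Deny rules: (source prefix, destination port); everything else is allowed.
DENY_RULES = [("10.0.0.", 23), ("10.0.0.", 3389)]

def firewall(packet: Packet) -> bool:
    """Return True if the packet may pass, False if it must be dropped."""
    for prefix, port in DENY_RULES:
        if packet.src.startswith(prefix) and packet.dst_port == port:
            return False
    return True

print(firewall(Packet("10.0.0.5", "192.168.1.1", 23)))   # False: telnet denied
print(firewall(Packet("10.0.0.5", "192.168.1.1", 443)))  # True: HTTPS allowed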
2.4. OPNFV