Interconnection Network Reliability Evaluation. Neeraj Kumar Goyal

of network reliability and the popular interconnection network architectures in the first two chapters. It then discusses approaches used for reliability evaluation of such networks in Chapters 3, 4, and 5. Some novel architectures that provide better reliability and fault tolerance while remaining cost effective are presented in Chapter 6.

      The authors would like to thank everyone (faculty, students, and staff) from the S.C. School of Quality and Reliability, IIT Kharagpur, India, for their encouragement, suggestions, and support while carrying out this work. We would also like to thank Prof. K.B. Mishra and Martin Scrivener for their guidance and support in writing this book.

       Dr. Neeraj Kumar Goyal and Dr. S. Rajkumar

       August 2020

      1

      Introduction

      1.1 Introduction

      In recent years, human beings have become largely dependent on communication networks, such as computer communication networks, telecommunication networks, and mobile switching networks, for their day-to-day activities [1]. In today’s world, both humans and critical machines depend on these communication networks working properly. Failure of these networks can leave people isolated, helpless, and exposed to hazards. It is a fact that every component or system can fail, and its failure probability increases with size and complexity.

      Therefore, it is essential to compute and assure the reliability of these networks, which are growing larger and more complex. Reliability modeling and computation are necessary for the reliability and safety assurance of these networks [2]. They also help to identify weak links, which can then be improved cost effectively using reliability design techniques. Recent developments in the communication hardware industry have resulted in increasingly reliable but non-repairable network components (i.e., components that must be replaced when they fail). However, new designs involve new components, which tend to be less reliable. A good network design [3] involving fault tolerance and redundancy can deliver better system reliability at lower cost. This allows new designs to be released faster and to work reliably even when their components are not yet mature from a reliability point of view.

      The computation of reliability measures [4] for a large and complex communication network, up to the desired level of accuracy, is time consuming, complex, and costly. It has not been practical to model and compute the reliability of real-life communication networks, which are quite large, on a desktop computer, owing to the long execution times and high memory requirements. Such computations are usually performed on high-end processors, and for critical systems only. Reliability professionals and researchers have carried out extensive research and developed techniques to minimize these efforts and to provide practical tools for communication network designers [4–6].

      Earlier attempts to measure network reliability fall into two distinct classes: deterministic and probabilistic [1, 2]. The deterministic measures assume that the network is subjected to a destructive force with complete knowledge of the network topology. Reliability is then measured in terms of the least amount of damage required to make the network inoperative.

      Deterministic measures thus provide simple bounds on the reliability of the network, since they are often measured for the network’s worst-case environment. For example, in the terminal pair reliability problem, two deterministic measures of reliability are:

      1 The minimum number of edges that must be destroyed or removed to disrupt the communication between the specified nodes (s and t), which is simply the number of edges in a minimum cardinality cutset, and

      2 The minimum number of nodes that must be destroyed or removed to disrupt the communication between the specified nodes (s and t).
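The first of these measures can be illustrated with a short sketch (not taken from the book): by Menger's theorem, the minimum number of edges whose removal disconnects s from t equals the s-t max-flow when every edge has unit capacity, so a simple augmenting-path search computes it in polynomial time. The function name and graph encoding below are illustrative choices, not the book's notation.

```python
from collections import defaultdict, deque

def min_edge_cut(edges, s, t):
    """Size of a minimum-cardinality s-t edge cutset, computed as the
    s-t max-flow in a unit-capacity graph (Menger's theorem)."""
    # Residual capacities: each undirected edge contributes capacity 1
    # in both directions.
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow = min cut size
        # Augment one unit of flow along the path found.
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

For example, a network with two node-disjoint s-t paths, s-a-t and s-b-t, has a minimum edge cutset of size two, since both paths must be severed. The second measure (minimum node cut) can be computed the same way after the standard node-splitting transformation.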

      Both of these measures are computable in polynomial time. However, one of the main problems with deterministic measures is that they give rise to counterintuitive notions of network reliability. For example, consider the networks shown in Figure 1.1. According to the second deterministic measure, the graphs of Figure 1.1 (a) and Figure 1.1 (b) are equally reliable, since both require a minimum of three nodes to be destroyed to break the s-t node connectivity.

      Figure 1.1 Example networks for deterministic reliability measurement.

      However, intuitively one can easily see that graph (a) is the more reliable of the two. The same problem arises when the cardinality of a minimum (s, t)-cutset is used as a measure of unreliability. Consider the graphs shown in Figure 1.2. Both graphs (a) and (b) have a minimum-cardinality (s, t) cut of size one, which implies that both networks are equally reliable.

      This clearly shows that deterministic measures of reliability are insufficient to correctly relate the network components used in a network layout to network reliability. Moreover, the failure of network components is probabilistic in nature; therefore, only probabilistic measures can define system reliability appropriately.

      Figure 1.2 Another set of example networks for deterministic reliability measurement.

      Communication networks are generally modeled using a network graph [3]. The network graph G(V, E) consists of a set V of n nodes (or vertices) and a set E of l edges (or links). For reliability evaluation, a probabilistic graph is used, which treats the states of the nodes in V and the links in E as random variables. In the probabilistic graph of a communication network, nodes represent the computers/switches/transceivers/routers and edges represent the various types of communication links connecting these nodes. For reliability analysis, graphical models of networks are considered simple, efficient, and effective.

      Probabilistic graph models are developed and presented in this book. Depending on the states (working or failed) of the nodes (or vertices) and/or links (or edges), the network is considered either working or failed. A general assumption of statistical independence among node and link failures is followed throughout. It implies that the probability of a link or node being operational does not depend on the states of the other links or nodes in the network. The inherent assumption here is that failures are caused by random events that affect each network component individually.
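Under this independence assumption, the exact two-terminal reliability of a small probabilistic graph can be obtained by enumerating all 2^l link states, multiplying the independent per-link probabilities in each state, and summing the probabilities of the states in which t remains reachable from s. The sketch below (an illustrative brute-force method, not one of the book's evaluation techniques, and feasible only for small l) assumes perfectly reliable nodes:

```python
from collections import defaultdict, deque
from itertools import product

def two_terminal_reliability(edges, p, s, t):
    """Exact s-t reliability by enumerating all 2^l link states,
    assuming statistically independent link failures and perfect nodes.
    `p` maps each edge (u, v) to its probability of working."""
    links = list(edges)
    rel = 0.0
    for state in product([True, False], repeat=len(links)):
        # Probability of this particular combination of link states,
        # as a product of independent per-link probabilities.
        prob = 1.0
        adj = defaultdict(set)
        for (u, v), up in zip(links, state):
            prob *= p[(u, v)] if up else 1.0 - p[(u, v)]
            if up:
                adj[u].add(v)
                adj[v].add(u)
        # The network works in this state if t is reachable from s
        # over the surviving links (checked by BFS).
        seen = {s}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        if t in seen:
            rel += prob
    return rel
```

For instance, two edge-disjoint s-t paths, s-a-t and s-b-t, with every link working with probability 0.9, give each path reliability 0.81 and overall reliability 1 - (1 - 0.81)^2 = 0.9639, which the enumeration reproduces exactly. The exponential cost of this enumeration is precisely why the efficient evaluation techniques of the later chapters are needed.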

      However, this assumption may not be completely correct when modeling a real communication network, as more than one component in a particular

