Then, the relevance of a feature dimension x_d depends on the sign of the corresponding term in Eq. (4.5), which for many classification problems is a more plausible interpretation. This second example shows that LRP can deal with nonlinearities such as the feature-space mapping φ_d to some extent, and what an example of LRP satisfying Eq. (4.2) may look like in practice.
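As a concrete illustration, the following is a minimal sketch of the linear case, where the decomposition R_d = w_d x_d (with the bias kept aside) satisfies Eq. (4.2) exactly; the weights, input, and bias are made-up values, not taken from the text.

```python
import numpy as np

# Linear classifier f(x) = sum_d w_d x_d + b, with illustrative
# (made-up) weights, input, and bias.
w = np.array([0.5, -1.2, 0.8])
x = np.array([1.0, 0.3, -2.0])
b = 0.1

f_x = w @ x + b          # prediction f(x) = -1.36
R = w * x                # relevances R_d = w_d x_d, one per input dimension

# The relevances sum back to f(x) up to the bias term, i.e. Eq. (4.2),
# and the sign of each R_d says whether x_d speaks for or against the class.
print(f_x, R, R.sum() + b)
```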
The above example gives an intuition about what relevance R is, namely, the local contribution to the prediction function f(x). In that sense, the relevance of the output layer is the prediction itself, f(x). The first example shows what one could expect as a decomposition in the linear case. The linear case is not a novelty; however, it provides a first intuition. A more graphic and nonlinear example is given in Figure 4.1. The upper part of the figure shows a neural-network-shaped classifier with weights w_ij on the connections between neurons. Each neuron i has an output a_i from an activation function. The top layer consists of one output neuron, indexed by 7. For each neuron i we would like to compute a relevance R_i. We initialize the top-layer relevance as the prediction itself,
R_7^(3) = f(x)  (4.6)

and we require that the total relevance is conserved from layer to layer,

f(x) = Σ_d R_d^(3) = Σ_d R_d^(2) = Σ_d R_d^(1)  (4.7)
We will make two assumptions for this example. First, we express the layer-wise relevance in terms of messages R_{i←k}^(l,l+1) sent from a neuron k at layer l+1 to its input neuron i at layer l along the connections of the network, as shown in the lower part of Figure 4.1. Second, we define the relevance of any neuron except the output neuron 7 as the sum of its incoming messages:

R_i^(l) = Σ_{k: i is input for neuron k} R_{i←k}^(l,l+1)  (4.8)
Figure 4.1 (a) Neural network (NN) as a classifier, (b) NN during the relevance computation.
For example, if input neuron 2 feeds the hidden neurons 4, 5, and 6, then R_2^(1) = R_{2←4}^(1,2) + R_{2←5}^(1,2) + R_{2←6}^(1,2). LRP further requires that the messages satisfy the conservation property

Σ_{i: i is input for neuron k} R_{i←k}^(l,l+1) = R_k^(l+1)  (4.9)
The difference between condition (4.9) and definition (4.8) is that in condition (4.9) the sum runs over the sources at layer l for a fixed neuron k at layer l+1, whereas in definition (4.8) the sum runs over the sinks at layer l+1 for a fixed neuron i at layer l.
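To see definition (4.8) and condition (4.9) in action, here is a minimal sketch on a small fully connected 3-3-1 ReLU network like the one in Figure 4.1. The text has not yet specified how the messages themselves are computed, so the sketch assumes one common choice, splitting R_k^(l+1) in proportion to the contributions z_ik = a_i w_ik with a small stabilizer; the network, weights, and the helper name lrp_messages are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative fully connected 3-3-1 network: inputs 1-3, hidden 4-6, output 7.
W1 = rng.normal(size=(3, 3))          # weights w_ij, inputs -> hidden layer
W2 = rng.normal(size=(3, 1))          # weights, hidden layer -> output neuron 7
x = rng.normal(size=3)

a1 = np.maximum(0.0, x @ W1)          # hidden activations a_4..a_6 (ReLU)
f = (a1 @ W2).item()                  # prediction f(x)

def lrp_messages(a, W, R_upper, eps=1e-9):
    """Messages R_{i<-k} = (a_i w_ik / z_k) R_k with z_k = sum_i a_i w_ik.
    Each column then sums to R_k, so condition (4.9) holds by construction."""
    z = a @ W                                     # total input z_k of each upper neuron
    z_stab = np.where(z >= 0, z + eps, z - eps)   # avoid division by zero
    return (a[:, None] * W) * (R_upper / z_stab)

R7 = np.array([f])                     # top-layer initialization, Eq. (4.6)
M2 = lrp_messages(a1, W2, R7)          # messages sent from neuron 7 down to 4..6
R_hidden = M2.sum(axis=1)              # definition (4.8): sum over the sinks k
M1 = lrp_messages(x, W1, R_hidden)     # messages from 4..6 down to inputs 1..3
R_input = M1.sum(axis=1)

# Conservation across layers, cf. Eq. (4.7).
print(f, R_hidden.sum(), R_input.sum())
```

Running the sketch prints three (nearly) identical numbers, up to the stabilizer: the layer-wise conservation f(x) = Σ R^(2) = Σ R^(1) that condition (4.9) enforces.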