Artificial Intelligence and Quantum Computing for Advanced Wireless Networks. Savo G. Glisic
(4.19)

$$\sum_i R_{i \leftarrow j}^{(l,l+1)} = R_j^{(l+1)} \cdot \left(1 - \frac{b_j}{z_j}\right)$$
where the multiplier accounts for the relevance that is absorbed (or injected) by the bias term. If necessary, the residual bias relevance can be redistributed onto each neuron x_i. A drawback of the propagation rule of Eq. (4.18) is that for small values of z_j, the relevances R_{i←j} can take unbounded values. Unboundedness can be overcome by introducing a predefined stabilizer ε ≥ 0:
(4.20)

$$R_{i \leftarrow j}^{(l,l+1)} = \frac{z_{ij}}{z_j + \varepsilon \cdot \operatorname{sign}(z_j)} \, R_j^{(l+1)}$$
The conservation law then becomes
(4.21)

$$\sum_i R_{i \leftarrow j}^{(l,l+1)} = R_j^{(l+1)} \cdot \frac{z_j - b_j}{z_j + \varepsilon \cdot \operatorname{sign}(z_j)}$$
where we can observe that some further relevance is absorbed by the stabilizer. In particular, relevance is fully absorbed if the stabilizer ε becomes very large.
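A single step of the stabilized propagation rule discussed above can be sketched in NumPy as follows. This is a minimal illustration, not a full LRP implementation; the function name and array shapes are my own assumptions.

```python
import numpy as np

def lrp_epsilon(x, W, b, R_upper, eps=0.01):
    """Propagate relevance one layer down with the epsilon-stabilized rule:
    R_{i<-j} = z_ij / (z_j + eps * sign(z_j)) * R_j.
    x: lower-layer activations, W/b: layer weights and biases,
    R_upper: relevances of the upper-layer neurons."""
    z_ij = x[:, None] * W               # contribution of neuron i to neuron j
    z_j = z_ij.sum(axis=0) + b          # pre-activations of the upper layer
    denom = z_j + eps * np.sign(z_j)    # stabilizer keeps the ratio bounded
    return (z_ij / denom) @ R_upper     # sum over upper-layer neurons j
```

With ε = 0 and zero biases the rule conserves relevance exactly (the sums over the lower and upper layers coincide); as ε grows, an increasing share of the relevance is absorbed by the stabilizer, as noted above.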
An alternative stabilizing method that does not leak relevance consists of treating negative and positive pre‐activations separately. Let
(4.22)

$$R_{i \leftarrow j}^{(l,l+1)} = R_j^{(l+1)} \cdot \left( \alpha \cdot \frac{z_{ij}^{+}}{z_j^{+}} + \beta \cdot \frac{z_{ij}^{-}}{z_j^{-}} \right), \qquad z_j^{\pm} = \sum_i z_{ij}^{\pm} + b_j^{\pm}$$
where α + β = 1. For example, for α = β = 1/2, the conservation law becomes
(4.23)

$$\sum_i R_{i \leftarrow j}^{(l,l+1)} = R_j^{(l+1)} \cdot \left( 1 - \frac{1}{2} \left( \frac{b_j^{+}}{z_j^{+}} + \frac{b_j^{-}}{z_j^{-}} \right) \right)$$
which has a similar form as Eq. (4.19). This alternative propagation method also allows one to manually control the importance of positive and negative evidence, by choosing different factors α and β.
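The α/β split can be sketched in NumPy as follows. The shapes and the zero-guards for columns that lack a positive (or negative) part are my own illustrative assumptions.

```python
import numpy as np

def lrp_alphabeta(x, W, b, R_upper, alpha=0.5, beta=0.5):
    """Redistribute upper-layer relevance while treating positive and
    negative pre-activations separately (alpha + beta = 1)."""
    z_ij = x[:, None] * W                          # contributions z_ij
    z_pos = np.clip(z_ij, 0.0, None)               # z_ij^+
    z_neg = np.clip(z_ij, None, 0.0)               # z_ij^-
    zj_pos = z_pos.sum(axis=0) + np.clip(b, 0.0, None)
    zj_neg = z_neg.sum(axis=0) + np.clip(b, None, 0.0)
    # guard against columns with no positive (or no negative) part
    zj_pos = np.where(zj_pos == 0.0, 1.0, zj_pos)
    zj_neg = np.where(zj_neg == 0.0, -1.0, zj_neg)
    frac = alpha * z_pos / zj_pos + beta * z_neg / zj_neg
    return frac @ R_upper                          # lower-layer relevances
```

Choosing α larger than β emphasizes positive evidence for the prediction, which is the manual control over positive and negative evidence mentioned above.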
Once a rule for relevance propagation has been selected, the overall relevance of each neuron in the lower layer is determined by summing up the relevances coming from all upper‐layer neurons in agreement with Eqs. (4.8) and (4.9):
(4.24)

$$R_i^{(l)} = \sum_j R_{i \leftarrow j}^{(l,l+1)}$$
Figure 4.3 Relevance propagation (heat map; relevance is represented by the intensity of the red color).
Source: Montavon et al. [92].
The relevance is backpropagated from one layer to another until it reaches the input pixels x(d), where the pixel relevances can be visualized as the heat map of Figure 4.3.
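The layer-by-layer backward pass just described can be sketched for a small fully connected ReLU network (an assumed architecture; a sketch using the ε-rule, not the full machinery needed for convolutional networks):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def lrp_backward(layers, x, eps=0.01):
    """Forward pass through a small ReLU network, then relevance
    backpropagation to the input with the epsilon rule, layer by layer.
    `layers` is a list of (W, b) pairs; an illustrative sketch only."""
    activations = [x]
    for W, b in layers:
        activations.append(relu(activations[-1] @ W + b))
    R = activations[-1].copy()                       # start at the output
    for (W, b), a in zip(reversed(layers), reversed(activations[:-1])):
        z_ij = a[:, None] * W                        # contributions z_ij
        z_j = z_ij.sum(axis=0) + b                   # pre-activations
        denom = np.where(z_j >= 0, z_j + eps, z_j - eps)
        R = (z_ij / denom) @ R                       # propagate and sum
    return R                                         # input-pixel relevances
```

With zero biases and a small ε, the total relevance at the input is (approximately) the total relevance injected at the output, matching the conservation property discussed above.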
4.3 Rule Extraction from LSTM Networks
In this section, we consider long short-term memory (LSTM) networks, which were discussed in Chapter 3, and describe an approach for tracking the importance of a given input to the LSTM for a given output. By identifying consistently important patterns of words, we are able to distill state-of-the-art LSTMs on sentiment analysis and question answering into a set of representative phrases. This representation is then quantitatively validated by using the extracted phrases to construct a simple rule-based classifier that approximates the output of the LSTM.
Word importance scores in LSTMs: Here, we present a decomposition of the output of an LSTM into a product of factors, where each term in the product can be interpreted as the contribution of a particular word. Thus, we can assign importance scores to words according to their contribution to the LSTM's prediction. We introduced the basics of LSTM networks in Chapter 3. Given a sequence of word embeddings x1, …, xT ∈ ℝd, an LSTM processes one word at a time, keeping track of cell and state vectors (c1, h1), …, (cT, hT), which contain the information in the sentence up to word t. ht and ct are computed as a function of xt and ct − 1 using the updates given by Eq.
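A minimal sketch of this product decomposition: writing the target logit as w·hT and taking h0 = 0, exp(w·hT) factors as the product over t of exp(w·(ht − h(t−1))), so each factor measures one word's multiplicative contribution to the class score. The linear output layer W_out and all names here are illustrative assumptions, not the book's notation.

```python
import numpy as np

def word_importance(h_states, W_out, target):
    """Multiplicative contribution of each word to the target logit.
    h_states: hidden states h_1..h_T, shape (T, hidden); W_out: an assumed
    linear output layer; h_0 = 0 is taken as the baseline."""
    scores = (h_states @ W_out)[:, target]         # w . h_t at every step t
    return np.exp(np.diff(scores, prepend=0.0))    # one factor per word
```

The product of the returned factors recovers exp(w·hT), the unnormalized score of the target class, so words with factors far from 1 are the ones the phrase-extraction step would flag as important.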