where g is a nonlinear function. The model can be used for both scalar and vector sequences. The one-step prediction can be represented as
$\hat{y}(k) = g\big(y(k-1), y(k-2), \ldots, y(k-M)\big)$ (3.44)
Given the focus of this chapter, we discuss how the network models studied so far can be used when the inputs to the network correspond to the time window y(k − 1) through y(k − M). Using neural networks for prediction in this case has become increasingly popular. For our purposes, g will be realized with an FIR network. Thus, the prediction
$\hat{y}(k) = N_M\big(y(k-1), y(k-2), \ldots, y(k-M)\big)$ (3.45)
where $N_M$ is an FIR network with total memory length M.
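To make the mapping in Eq. (3.45) concrete, the following is a minimal sketch in which the network $N_M$ is represented simply as a one-hidden-layer feedforward map of the M most recent samples. The layered FIR-synapse structure of the actual FIR network is not reproduced here; the function name predict_one_step, the tanh nonlinearity, and the weight shapes are illustrative assumptions only.

```python
import numpy as np

# Hypothetical stand-in for N_M in Eq. (3.45): a one-hidden-layer network
# applied to the time window y(k-1), ..., y(k-M) (most recent sample first).
def predict_one_step(y_window, W1, b1, w2, b2):
    """Return the single-step estimate y_hat(k) = N_M(y(k-1), ..., y(k-M)).

    y_window : array of shape (M,), ordered y(k-1), y(k-2), ..., y(k-M)
    W1, b1   : hidden-layer weights (H, M) and biases (H,)
    w2, b2   : output weights (H,) and bias (length-1 array)
    """
    h = np.tanh(W1 @ y_window + b1)   # hidden layer supplies the nonlinearity g
    return float(w2 @ h + b2)         # scalar estimate y_hat(k)
```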
3.3.1 Adaptation and Iterated Predictions
The basic predictor training configuration for the FIR network is shown in Figure 3.9, with a known value of y(k − 1) as the input and the output taken as the single-step estimate of y(k).

Figure 3.9 Network prediction configuration.

In the ideal case, training selects the network that minimizes the mean squared prediction error,

$N^{*} = \arg\min_{N} E\left\{\left[y(k) - N\big(y(k-1), y(k-2), \ldots, y(k-M)\big)\right]^{2}\right\}$ (3.46)
where y(k) is viewed as a stationary ergodic process, and the expectation is taken over the joint distribution of y(k) through y(k − M). N* represents a closed‐form optimal solution that can only be approximated due to finite training data and constraints in the network topology.
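With only a finite record y(0), …, y(N − 1), the expectation in Eq. (3.46) cannot be evaluated directly, and in practice the weights are adjusted to minimize the average squared one-step error over the available data. The sketch below illustrates this with plain stochastic gradient descent on the hypothetical network above; the helper names make_pairs and sgd_step are assumptions, and the book's FIR networks would instead be trained with temporal backpropagation.

```python
# Empirical counterpart of Eq. (3.46): the expectation over the joint
# distribution of y(k), ..., y(k-M) is replaced by an average over the
# (window, target) pairs that a finite record provides.
def make_pairs(y, M):
    """Build training pairs: window = (y(k-1), ..., y(k-M)), target = y(k)."""
    X = np.array([y[k - M:k][::-1] for k in range(M, len(y))])
    t = np.array(y[M:])
    return X, t

def sgd_step(x, target, W1, b1, w2, b2, lr=1e-2):
    """One stochastic-gradient update of the squared one-step prediction error."""
    h = np.tanh(W1 @ x + b1)
    e = target - float(w2 @ h + b2)      # one-step prediction error
    g_h = e * w2 * (1.0 - h ** 2)        # backpropagate through the tanh layer
    W1 += lr * np.outer(g_h, x)          # parameters are updated in place,
    b1 += lr * g_h                       # so biases are kept as numpy arrays
    w2 += lr * e * h
    b2 += lr * e
    return 0.5 * e ** 2                  # instantaneous squared-error cost
```

A training pass would simply loop sgd_step over the rows returned by make_pairs(y, M) for several epochs.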
Iterated predictions: Once the network is trained, iterated prediction is achieved by taking the estimate $\hat{y}(k)$ and feeding it back as an input to the network in place of the measured value, so that

$\hat{y}(k) = N_M\big(\hat{y}(k-1), \hat{y}(k-2), \ldots, \hat{y}(k-M)\big)$ (3.47)
as illustrated in Figure 3.9. Equation (3.47) can now be iterated forward in time to achieve predictions as far into the future as desired. Suppose, for example, that we were given only N points for some time series of interest. We would train the network on those N points. The single‐step estimate $\hat{y}(N+1)$ would then be fed back as an input to the network to form $\hat{y}(N+2)$, and so on.
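The closed-loop behavior described above can be sketched as follows; the routine reuses the hypothetical predict_one_step from the earlier sketch, and the name iterate_predictions is likewise an assumption.

```python
# Iterated prediction in the spirit of Eq. (3.47): each single-step estimate is
# appended to the input window and treated as if it were an observed value.
def iterate_predictions(y_history, steps, M, W1, b1, w2, b2):
    """Return `steps` future estimates obtained by feeding predictions back."""
    buf = list(y_history[-M:])              # last M known (or estimated) values,
    out = []                                # stored oldest ... newest
    for _ in range(steps):
        window = np.array(buf[::-1])        # reorder: most recent value first
        y_hat = predict_one_step(window, W1, b1, w2, b2)
        out.append(y_hat)
        buf = buf[1:] + [y_hat]             # slide window; estimate replaces data
    return out
```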
3.4 Recurrent Neural Networks
3.4.1 Filters as Predictors
Linear filters: As already indicated in this chapter, linear filters have been widely used as predictor structures. In general, there are two families of filters: those without feedback, whose output depends only upon current and past input values; and those with feedback, whose output depends upon both input values and past outputs. Such filters are best described by a constant‐coefficient difference equation of the form

$y(k) = \sum_{i=1}^{p} a_i\, y(k-i) + \sum_{j=0}^{q} b_j\, e(k-j)$ (3.48)
where y(k) is the output, e(k) is the input, ai, i = 1, 2, … , p, are the autoregressive (AR) feedback coefficients, and bj, j = 0, 1, … , q, are the moving average (MA) feedforward coefficients. Such a filter is termed an autoregressive moving average (ARMA(p, q)) filter, where p is the order of the autoregressive, or feedback, part of the structure, and q is the order of the MA, or feedforward, part of the structure. Due to the feedback present within this filter, the impulse response – that is, the values of y(k), k ≥ 0, when e(k) is a discrete‐time impulse – is infinite in duration, and therefore such a filter is referred to as an infinite impulse response (IIR) filter.
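As a quick illustration of Eq. (3.48), the following sketch implements the ARMA(p, q) recursion directly; the function name arma_filter and the zero initial conditions are assumptions, and the sign convention for the ai follows the equation as written above rather than any particular signal-processing library.

```python
# Direct transcription of the ARMA(p, q) difference equation (3.48),
#   y(k) = sum_{i=1..p} a_i * y(k-i) + sum_{j=0..q} b_j * e(k-j),
# with zero initial conditions.
def arma_filter(e, a, b):
    p, q = len(a), len(b) - 1
    y = [0.0] * len(e)
    for k in range(len(e)):
        ar = sum(a[i - 1] * y[k - i] for i in range(1, p + 1) if k - i >= 0)
        ma = sum(b[j] * e[k - j] for j in range(q + 1) if k - j >= 0)
        y[k] = ar + ma                      # feedback (AR) plus feedforward (MA)
    return y

# A unit impulse excites an infinite-duration response whenever feedback is
# present, e.g. arma_filter([1, 0, 0, 0], [0.9], [1.0]) -> [1.0, 0.9, 0.81, 0.729],
# which is why such a filter is called IIR.
```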
The general form of Eq. (3.48) is simplified by removing the feedback terms as
$y(k) = \sum_{j=0}^{q} b_j\, e(k-j)$ (3.49)
Such a filter is called MA(q) and has a finite impulse response identical to the coefficient sequence bj, j = 0, 1, … , q. In digital signal processing, therefore, such a filter is called an FIR filter. Similarly, Eq. (3.48) reduces to a purely autoregressive AR(p) filter when the feedforward terms bj, j = 1, … , q, are removed; owing to its feedback, such a filter is again of the IIR type.
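For comparison, the MA(q) case of Eq. (3.49) can be sketched in the same style; ma_filter is again a hypothetical name.

```python
# MA(q) / FIR filter of Eq. (3.49): no feedback, so the impulse response is the
# coefficient sequence b_j itself and dies out after q + 1 samples.
def ma_filter(e, b):
    q = len(b) - 1
    return [sum(b[j] * e[k - j] for j in range(q + 1) if k - j >= 0)
            for k in range(len(e))]

# Example: ma_filter([1, 0, 0, 0, 0], [0.5, 0.3, 0.2]) -> [0.5, 0.3, 0.2, 0.0, 0.0]
```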