NARMAX(p, q, r), with associated predictor

      (3.59) $\hat{y}(k) = \Theta\left( \sum_{i=1}^{p} a_i\, y(k-i) + \sum_{j=1}^{q} b_j\, \hat{e}(k-j) + \sum_{s=1}^{r} c_s\, u(k-s) \right),$

      which again exploits feedback.
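
      To make the recursion concrete, here is a minimal Python/NumPy sketch of the predictor (3.59): past prediction errors ê(k − j) are fed back together with past outputs y(k − i) and inputs u(k − s). The choice of tanh for Θ and the particular coefficient values are illustrative assumptions, not part of the model itself.

      import numpy as np

      def narmax_predict(y, u, a, b, c, theta=np.tanh):
          """NARMAX(p, q, r) predictor of Eq. (3.59), run over a whole record.

          y, u    : observed output and input sequences (equal length)
          a, b, c : coefficient vectors of lengths p, q, r (assumed known here;
                    in practice they are estimated from data)
          theta   : output nonlinearity Theta (tanh is only a placeholder)
          """
          p, q, r = len(a), len(b), len(c)
          y_hat = np.zeros(len(y))
          e_hat = np.zeros(len(y))                  # fed-back prediction errors
          for k in range(max(p, q, r), len(y)):
              acc  = sum(a[i - 1] * y[k - i]     for i in range(1, p + 1))  # sum a_i y(k-i)
              acc += sum(b[j - 1] * e_hat[k - j] for j in range(1, q + 1))  # sum b_j e^(k-j)
              acc += sum(c[s - 1] * u[k - s]     for s in range(1, r + 1))  # sum c_s u(k-s)
              y_hat[k] = theta(acc)
              e_hat[k] = y[k] - y_hat[k]            # error feedback makes the predictor recurrent
          return y_hat

      # Toy usage with synthetic data and assumed coefficients:
      rng = np.random.default_rng(1)
      u = rng.standard_normal(200)
      y = np.sin(0.1 * np.cumsum(u))
      y_hat = narmax_predict(y, u, a=[0.5, -0.2], b=[0.3], c=[0.1, 0.05])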

      3.4.2 Feedback Options in Recurrent Neural Networks

      (3.60) $\hat{y}(k) = \Phi\left( y(k-1), y(k-2), \ldots, y(k-p), \hat{e}(k-1), \ldots, \hat{e}(k-q) \right),$

[Figure: Schematic illustration of a recurrent neural network.]

      State‐space representation and canonical form: Any feedback network can be cast into a canonical form consisting of a feedforward (static) network (FFSN) (i) whose outputs are the outputs of those neurons that carry the desired values, together with the values of the state variables, and (ii) whose inputs are the inputs of the network together with the values of the state variables, the latter delayed by one time unit.

[Figure: Canonical form of a recurrent neural network for prediction.]

[Figure 3.13: Recurrent neural network (RNN) architectures: (a) activation feedback and (b) output feedback.]

      (3.62) $\hat{y}(k) = \Psi\left( s(k-1), y(k-1), \hat{y}(k-1) \right),$

      where φ and Ψ represent general classes of nonlinearities.
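
      The canonical form admits a very compact implementation: a single static map produces both the next values of the state variables and the network output, and the state re-enters the input through a one-step delay. The sketch below assumes a one-layer tanh network for the static part and arbitrary small dimensions; both are placeholders for whatever FFSN the canonical form wraps.

      import numpy as np

      rng = np.random.default_rng(0)
      n_s, n_in, n_out = 4, 1, 1          # state/input/output dimensions (assumed)
      W = rng.normal(scale=0.3, size=(n_s + n_out, n_s + n_in))

      def ffsn(x):
          """Static feedforward part: one tanh layer, standing in for any FFSN."""
          return np.tanh(W @ x)

      def run_canonical(u_seq):
          """Canonical recurrent form: the static net emits the next state and the
          output; the state re-enters its input delayed by one time unit."""
          s = np.zeros(n_s)               # state variables s(k - 1)
          outputs = []
          for u in u_seq:
              z = ffsn(np.concatenate([s, np.atleast_1d(u)]))
              s, y_k = z[:n_s], z[n_s:]   # split: next state / network output
              outputs.append(y_k)
          return np.array(outputs)

      y_out = run_canonical(np.sin(np.linspace(0, 6, 50)))   # toy input sequence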

      The output of the neuron shown in Figure 3.13a can be expressed as

      (3.63) $v(k) = \sum_{i=0}^{M} \omega_{u,i}(k)\, u(k-i) + \sum_{j=1}^{N} \omega_{v,j}(k)\, v(k-j), \qquad y(k) = \Phi(v(k))$

      where $\omega_{u,i}$ and $\omega_{v,j}$ are the weights associated with the input u and the activation v, respectively. In the case of Figure 3.13b, we have

      (3.64) $v(k) = \sum_{i=0}^{M} \omega_{u,i}(k)\, u(k-i) + \sum_{j=1}^{N} \omega_{y,j}(k)\, y(k-j), \qquad y(k) = \Phi(v(k))$
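
      The two feedback options differ only in which signal is tapped and delayed, which the following sketch of Eqs. (3.63) and (3.64) makes explicit. The tap lengths M and N, the fixed weights, and tanh for Φ are assumptions chosen for illustration.

      import numpy as np

      def feedback_neuron(u, w_u, w_fb, mode="activation", phi=np.tanh):
          """Single neuron with delayed feedback taps, per Eqs. (3.63)/(3.64).

          mode="activation": feeds back past activations v(k - j), Figure 3.13a
          mode="output"    : feeds back past outputs     y(k - j), Figure 3.13b
          w_u  : weights on u(k), ..., u(k - M)        (M + 1 taps)
          w_fb : weights on the delayed feedback terms (j = 1, ..., N)
          """
          M, N = len(w_u) - 1, len(w_fb)
          v = np.zeros(len(u))
          y = np.zeros(len(u))
          for k in range(len(u)):
              ff = sum(w_u[i] * u[k - i] for i in range(M + 1) if k - i >= 0)
              src = v if mode == "activation" else y   # the only difference
              fb = sum(w_fb[j - 1] * src[k - j] for j in range(1, N + 1) if k - j >= 0)
              v[k] = ff + fb
              y[k] = phi(v[k])
          return y

      Swapping mode from "activation" to "output" replaces the feedback source v(k − j) with y(k − j) and changes nothing else, mirroring the move from Figure 3.13a to Figure 3.13b.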