algorithm [29]. The MVAR model, its application in representing what is called the directed transfer function (DTF), and its use in quantifying signal propagation within the brain will be explained in detail in Chapter 8 of this book. After the parameters are estimated, the synthesis filter can be excited with wide‐sense stationary noise to generate the EEG signal samples. Figure 3.12 illustrates the simplified system.
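As a rough illustration of this synthesis step, the following Python sketch excites an all‐pole (AR) filter with white, wide‐sense stationary noise; the coefficient values and the use of scipy.signal.lfilter are illustrative choices, not taken from the book.

```python
import numpy as np
from scipy.signal import lfilter

# Sketch: synthesize an EEG-like signal from an already-estimated AR model.
# The coefficients below are illustrative placeholders, not estimated from real EEG.
a = np.array([1.0, -1.7, 1.2, -0.4])   # AR polynomial 1 + a1*z^-1 + ... + ap*z^-p (a[0] = 1)
n_samples = 1000
rng = np.random.default_rng(0)

# Wide-sense stationary (white Gaussian) excitation for the synthesis filter
e = rng.standard_normal(n_samples)

# All-pole synthesis filter: x(n) = -a1*x(n-1) - ... - ap*x(n-p) + e(n)
x = lfilter([1.0], a, e)
```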

The prediction models can easily be extended to multichannel data. This leads to the estimation of matrices of prediction coefficients, whose entries capture both the temporal and the inter‐channel correlations.
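As a minimal sketch of this multichannel extension, the snippet below estimates the order‐p MVAR coefficient matrices by ordinary least squares; the function name fit_mvar and the data layout (channels × samples) are assumptions for illustration, not notation from the book.

```python
import numpy as np

def fit_mvar(X, p):
    """Least-squares fit of an order-p MVAR model to multichannel data X (channels x samples).
    Returns a list of p coefficient matrices A1..Ap, each of size (channels x channels)."""
    n_ch, n = X.shape
    # Regression form: x(n) = A1 x(n-1) + ... + Ap x(n-p) + e(n)
    Y = X[:, p:]                                                  # targets, shape (n_ch, n-p)
    Z = np.vstack([X[:, p - k:n - k] for k in range(1, p + 1)])   # stacked lags, (p*n_ch, n-p)
    # Solve Y = A_all @ Z in the least-squares sense
    A_all, *_ = np.linalg.lstsq(Z.T, Y.T, rcond=None)
    A_all = A_all.T                                               # shape (n_ch, p*n_ch)
    return [A_all[:, k * n_ch:(k + 1) * n_ch] for k in range(p)]
```

Each returned matrix describes how all channels at lag k contribute to the current sample of every channel.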

      3.4.1.2 Prony's Method

Prony's method has previously been used to model EPs [30, 31]. Based on this model, an EP, which is obtained by applying a short auditory or visual stimulus to the brain, can be considered as the impulse response (IR) of a linear infinite impulse response (IIR) system. The original attempt in this area was to fit an exponentially damped sinusoidal model to the data [32]. This method was later modified to model sinusoidal signals [33]. Prony's method is used to calculate the LP parameters. The angles of the poles in the z‐plane of the constructed LP filter then correspond to the frequencies of the damped sinusoids, i.e. of the exponential terms used to model the data. Consequently, both the amplitudes and the initial phases of the exponentials can be obtained following the methods used for an AR model, as follows.

      Based on the original method we can consider the output of an AR system with zero excitation to be related to its IR as:

(3.44)$$x(n) = h(n) = \sum_{k=1}^{p} w_k z_k^{\,n-1}, \qquad n = 1, 2, \ldots, N$$

where the $z_k$ are the poles of the all‐pole (AR) system and the $w_k$ are the complex weights of the corresponding exponential (damped sinusoidal) terms.

Therefore, the model coefficients $a_k$ are first calculated using one of the methods previously mentioned in this section, and the poles $z_k$ are then obtained as the roots of the characteristic equation:

(3.45)$$1 + \sum_{k=1}^{p} a_k z^{-k} = 0$$

The damping factors are then calculated as:

(3.46)$$\alpha_k = \ln |z_k|$$

and the resonance frequencies as:

(3.47)$$f_k = \frac{1}{2\pi} \tan^{-1}\!\left[\frac{\operatorname{Im}(z_k)}{\operatorname{Re}(z_k)}\right]$$

where Re(·) and Im(·) denote respectively the real and imaginary parts of a complex quantity. The $w_k$ parameters are calculated using the fact that $x(n) = \sum_{k=1}^{p} w_k z_k^{\,n-1}$, or:

(3.48)$$\begin{bmatrix} x(1) \\ x(2) \\ \vdots \\ x(p) \end{bmatrix} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_p \\ \vdots & \vdots & \ddots & \vdots \\ z_1^{\,p-1} & z_2^{\,p-1} & \cdots & z_p^{\,p-1} \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_p \end{bmatrix}$$

In vector form this can be written as $\mathbf{R}\mathbf{w} = \mathbf{y}$, where $[\mathbf{R}]_{kl} = z_l^{\,k}$, $k = 0, 1, \cdots, p-1$, $l = 1, \cdots, p$, denotes the elements of the matrix in the above equation. Therefore, $\mathbf{w} = \mathbf{R}^{-1}\mathbf{y}$, assuming $\mathbf{R}$ is a full‐rank matrix, i.e. there are no repeated poles. Often, this is simply carried out by implementing the Cholesky decomposition algorithm [34]. Finally, using the $w_k$, the amplitudes and initial phases of the exponential terms are calculated as follows:

(3.49)$$A_k = |w_k|$$

and

(3.50)$$\theta_k = \tan^{-1}\!\left[\frac{\operatorname{Im}(w_k)}{\operatorname{Re}(w_k)}\right]$$

In the above solution we assumed that the number of data samples N equals 2p, where p is the prediction order. For cases where N > 2p, a least‐squares (LS) solution for w can be obtained as:

(3.51)$$\mathbf{w} = \left(\mathbf{R}^{\mathrm{H}}\mathbf{R}\right)^{-1}\mathbf{R}^{\mathrm{H}}\mathbf{y}$$

where $(\cdot)^{\mathrm{H}}$ denotes the Hermitian (conjugate) transpose.
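The complete procedure of Eqs. (3.44)–(3.51) can be summarized in a short Python sketch. The least‐squares setup used for the prediction coefficients and the helper name prony are illustrative assumptions, not prescriptions from the text; only numpy is used.

```python
import numpy as np

def prony(x, p):
    """Sketch of Prony's method: fit p exponentially damped sinusoids to x.
    Returns damping factors, normalized frequencies, amplitudes and initial phases."""
    N = len(x)
    # 1) Linear-prediction (AR) coefficients from the data, least-squares fit of
    #    x(n) = -a1*x(n-1) - ... - ap*x(n-p)
    X = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    a = np.linalg.lstsq(X, -x[p:N], rcond=None)[0]
    # 2) Poles z_k: roots of the characteristic equation 1 + a1*z^-1 + ... + ap*z^-p = 0 (Eq. 3.45)
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) Damping factors and resonance frequencies (Eqs. 3.46-3.47)
    alpha = np.log(np.abs(z))
    f = np.arctan2(z.imag, z.real) / (2 * np.pi)        # cycles per sample
    # 4) Weights w from the Vandermonde system R w = y (Eq. 3.48), LS form (Eq. 3.51)
    R = np.vander(z, N, increasing=True).T              # element [k, l] = z_l**k, k = 0..N-1
    w = np.linalg.lstsq(R, x.astype(complex), rcond=None)[0]
    # 5) Amplitudes and initial phases (Eqs. 3.49-3.50)
    A = np.abs(w)
    theta = np.arctan2(w.imag, w.real)
    return alpha, f, A, theta
```

Using np.linalg.lstsq for both stages covers the N = 2p case and the overdetermined N > 2p case of Eq. (3.51) in one step.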

      In cases for which the data are contaminated with white noise, the performance of Prony's method is reasonable. However, for non‐white noise, the noise information is not easily separable from the data and therefore the method may not be sufficiently successful.

As we will see in a later chapter of this book, Prony's algorithm has been used in the modelling and analysis of auditory and visual EPs (AEPs and VEPs) [31, 35].

      3.4.2 Nonlinear Modelling

An approach similar to AR or MVAR modelling, in which the output samples are nonlinearly related to the previous samples, may be followed based on methods developed for financial forecasting in economic studies.

In the generalized autoregressive conditional heteroskedasticity (GARCH) method [36], each sample is related to its previous samples through a nonlinear function (or a sum of nonlinear functions). This model was originally introduced for modelling time‐varying volatility, work that was honoured with the Nobel Memorial Prize in Economic Sciences in 2003.
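As a concrete, hedged example of such a model, the following sketch simulates a GARCH(1,1) recursion, in which the conditional variance of each sample is a nonlinear (quadratic) function of the previous sample; the parameter values are arbitrary illustrations.

```python
import numpy as np

# Minimal GARCH(1,1) sketch: conditional variance depends on the squared past sample.
omega, alpha, beta = 0.1, 0.2, 0.7           # illustrative values; alpha + beta < 1 for stationarity
n_samples = 2000
rng = np.random.default_rng(1)
u = rng.standard_normal(n_samples)           # i.i.d. driving noise

x = np.zeros(n_samples)
sigma2 = np.full(n_samples, omega / (1 - alpha - beta))   # start at the unconditional variance
for n in range(1, n_samples):
    sigma2[n] = omega + alpha * x[n - 1] ** 2 + beta * sigma2[n - 1]
    x[n] = np.sqrt(sigma2[n]) * u[n]         # conditionally heteroskedastic output
```

Here the nonlinearity enters through the variance, i.e. a multiplicative dependence in the sense discussed below.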

Nonlinearities in a time series can be detected with the aid of the McLeod–Li [37] and Brock–Dechert–Scheinkman (BDS) [38] tests. However, neither test reveals the actual kind of nonlinear dependency.
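A rough sketch of the McLeod–Li idea is given below: a Ljung–Box‐type portmanteau statistic is computed from the autocorrelations of the squared series (in practice it is usually applied to the squared residuals of a fitted linear model). The function name and the default number of lags are illustrative assumptions; the BDS test is more involved and is not sketched here.

```python
import numpy as np
from scipy.stats import chi2

def mcleod_li(x, m=20):
    """Sketch of the McLeod-Li test: Ljung-Box statistic on the squared series.
    A small p-value suggests nonlinear (e.g. ARCH/GARCH-type) dependence."""
    s = x ** 2
    s = s - s.mean()
    n = len(s)
    denom = np.dot(s, s)
    # Sample autocorrelations of the squared series at lags 1..m
    rho = np.array([np.dot(s[k:], s[:-k]) / denom for k in range(1, m + 1)])
    q = n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, m + 1)))
    p_value = chi2.sf(q, df=m)
    return q, p_value
```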

Generally, it is not possible to discern whether the nonlinearity is deterministic or stochastic in nature, nor can we distinguish between multiplicative and additive dependencies. The type of stochastic nonlinearity may be determined with the Hsieh test [39], which can discriminate between additive and multiplicative dependencies. However, the test itself is not used to obtain the model parameters.

Considering the input to a nonlinear system to be u(n) and the generated signal, i.e. the output of such a system, to be x(n), a restricted class of nonlinear models suitable for the analysis of such a process is given by:

(3.52)$$x(n) = g\left[u(n-1), u(n-2), \ldots\right] + u(n)\, h\left[u(n-1), u(n-2), \ldots\right]$$

      Multiplicative dependence means nonlinearity in the variance, which requires the function h(.) to be nonlinear; additive dependence, conversely, means nonlinearity in the mean, which holds if the function g(.) is nonlinear. The conditional statistical mean and variance are respectively defined as:

(3.53)$$E\left[x(n) \mid u(n-1), u(n-2), \ldots\right] = g\left[u(n-1), u(n-2), \ldots\right]$$

and

$$\operatorname{Var}\left[x(n) \mid u(n-1), u(n-2), \ldots\right] = \sigma_u^2\, h^2\left[u(n-1), u(n-2), \ldots\right]$$

where $\sigma_u^2$ denotes the variance of the zero‐mean input $u(n)$.
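To make the distinction concrete, the following sketch simulates Eq. (3.52) restricted to a single lag; the particular choices of g(·) and h(·) are arbitrary illustrations, not functions given in the text. A nonlinear g produces additive dependence (in the conditional mean), while a nonlinear h produces multiplicative dependence (in the conditional variance).

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples = 2000
u = rng.standard_normal(n_samples)            # zero-mean input u(n)

# Arbitrary illustrative choices (not from the book):
g = lambda u1: 0.5 * u1 ** 2                  # nonlinear g -> additive dependence (conditional mean)
h = lambda u1: np.sqrt(0.2 + 0.6 * u1 ** 2)   # nonlinear h -> multiplicative dependence (variance)

x = np.zeros(n_samples)
for n in range(1, n_samples):
    # Eq. (3.52) with a single lag:
    # conditional mean = g(u(n-1)), conditional variance = h(u(n-1))**2 * var(u)
    x[n] = g(u[n - 1]) + u[n] * h(u[n - 1])
```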