EEG Signal Processing and Machine Learning. Saeid Sanei

Figure 4.11 An adaptive noise canceller.

      (4.91) e(n) = d(n) - w^T x(n)

      where w is the Wiener filter coefficient vector. Using the orthogonality principle [39], the final form of the mean squared error becomes:

      (4.92) E[e^2(n)] = E[d^2(n)] - 2w^T p + w^T R w

      where E(.) represents statistical expectation:

      (4.93) R = E[x(n) x^T(n)]

      and

      (4.94) p = E[d(n) x(n)]

      By taking the gradient with respect to w and equating it to zero we have:

      (4.95) w_opt = R^{-1} p
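As a quick numerical illustration, the Wiener solution can be computed from sample (time-average) estimates of R and p. This is a minimal NumPy sketch; the signals, the coupling path h, and the filter length M are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 5000, 4                         # samples and (assumed) filter length

x = rng.standard_normal(N)             # reference input x(n)
h = np.array([0.8, -0.4, 0.2, 0.1])    # unknown coupling path (illustrative)
d = np.convolve(x, h)[:N] + 0.1 * rng.standard_normal(N)   # desired signal d(n)

# Tap-input matrix: row n holds [x(n), x(n-1), ..., x(n-M+1)]
X = np.column_stack([np.roll(x, k) for k in range(M)])
X[:M] = 0.0                            # discard the wrapped-around samples

R = X.T @ X / N                        # time-average estimate of R = E[x x^T]
p = X.T @ d / N                        # time-average estimate of p = E[d x]
w_opt = np.linalg.solve(R, p)          # Wiener solution w = R^{-1} p
```

With enough samples, the solved coefficients approach the true path h, since the time averages converge to the statistical R and p.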

      As R and p are usually unknown, the above minimization is performed iteratively by substituting time averages for statistical averages. The adaptive filter in this case decorrelates the output signals. The general update equation has the form:

      (4.96) w(n+1) = w(n) + Δw(n)

      where n is the iteration number, which typically corresponds to the discrete-time index. Δw(n) has to be computed such that E[e^2(n)] reaches a reasonable minimum. The simplest and most common way of calculating Δw(n) is by the gradient descent or steepest-descent algorithm [39]. In both cases, a criterion (often called a performance index) is defined as a function of the squared error, such as η(e^2(n)), such that it monotonically decreases after each iteration and converges to a global minimum. This requires:

      (4.97) η(e^2(n+1)) < η(e^2(n))

      (4.99) Δw(n) = -μ ∇_w(η(w))

      Using the least mean square (LMS) approach, ∇w (η(w)) is replaced by an instantaneous gradient of the squared error signal, i.e.:

      (4.100) ∇_w(η(w)) ≈ ∇_w(e^2(n)) = -2e(n) x(n)

      Therefore, the LMS‐based update equation is

      (4.101) w(n+1) = w(n) + 2μ e(n) x(n)
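The LMS recursion (4.101) can be sketched as follows; the signals, filter length, and step size μ are illustrative assumptions (here a noise-free system-identification setup so convergence is easy to see):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 20000, 4                        # samples and (assumed) filter length
x = rng.standard_normal(N)             # reference input
h = np.array([0.8, -0.4, 0.2, 0.1])    # unknown path to identify (illustrative)
d = np.convolve(x, h)[:N]              # desired signal (noise-free for clarity)

w = np.zeros(M)                        # filter weights w(n)
mu = 0.01                              # small step size, chosen for stability
xbuf = np.zeros(M)                     # tap vector [x(n), ..., x(n-M+1)]
for n in range(N):
    xbuf = np.concatenate(([x[n]], xbuf[:-1]))
    e = d[n] - w @ xbuf                # instantaneous error e(n)
    w = w + 2 * mu * e * xbuf          # LMS update (4.101)
```

After enough iterations the weights converge to the unknown path h; a too-large μ would instead make the loop diverge.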

      Also, the convergence parameter, μ, must be positive and should satisfy the following:

      (4.102) 0 < μ < 1/λ_max

      where λ_max represents the maximum eigenvalue of the autocorrelation matrix R. The LMS algorithm is the simplest and most computationally efficient algorithm. However, its convergence can be slow, especially for correlated signals. The recursive least-squares (RLS) algorithm attempts to provide a faster and more stable filter, but it can be numerically unstable in real-time applications [40, 41]. The performance index is defined as the exponentially weighted squared error:

      (4.103) J(n) = Σ_{k=0}^{n} λ^{n-k} e^2(k)

      where 0 < λ ≤ 1 is the forgetting factor and e(k) = d(k) - w^T(n)x(k). Then, by taking the derivative with respect to w and equating it to zero, we obtain:

      (4.105) ∇_w J(n) = 2R(n)w(n) - 2P(n) = 0

      where

      (4.106) R(n) = Σ_{k=0}^{n} λ^{n-k} x(k)x^T(k)

      and

      (4.107) P(n) = Σ_{k=0}^{n} λ^{n-k} d(k)x(k)

      From this equation:

      (4.108) w(n) = R^{-1}(n) P(n)

      The RLS algorithm performs the above operation recursively such that P and R are estimated at the current time n as:

      (4.109) R(n) = λR(n-1) + x(n)x^T(n)

      (4.110) P(n) = λP(n-1) + d(n)x(n)

      (4.111) x(n) = [x(n), x(n-1), ..., x(n-M+1)]^T

      where M represents the finite impulse response (FIR) filter order. Consequently:

      (4.112) R^{-1}(n) = [λR(n-1) + x(n)x^T(n)]^{-1}

      which can be simplified using the matrix inversion lemma [42]:

      (4.113) R^{-1}(n) = λ^{-1}R^{-1}(n-1) - (λ^{-2}R^{-1}(n-1)x(n)x^T(n)R^{-1}(n-1)) / (1 + λ^{-1}x^T(n)R^{-1}(n-1)x(n))
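The matrix inversion lemma for this rank-one update can be verified numerically. This sketch compares the lemma form against direct inversion for an arbitrary positive-definite R(n-1); the dimension and forgetting factor λ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
M, lam = 4, 0.99                       # dimension and forgetting factor (assumed)
A = rng.standard_normal((M, M))
R_prev = A @ A.T + M * np.eye(M)       # a positive-definite R(n-1)
x = rng.standard_normal((M, 1))        # new input vector x(n)

# Direct inversion of the rank-one update R(n) = lam*R(n-1) + x x^T
direct = np.linalg.inv(lam * R_prev + x @ x.T)

# Matrix inversion lemma form, using only R^{-1}(n-1)
P = np.linalg.inv(R_prev)              # R^{-1}(n-1)
lemma = P / lam - (P @ x @ x.T @ P) / (lam**2 * (1.0 + (x.T @ P @ x) / lam))
```

The two results agree to machine precision; in the RLS recursion the lemma form is preferred because it replaces a matrix inversion per sample with vector operations.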

      and finally, the update equation can be written as:

      (4.114) w(n) = w(n-1) + k(n)e(n)

      where

      (4.115) k(n) = (λ^{-1}R^{-1}(n-1)x(n)) / (1 + λ^{-1}x^T(n)R^{-1}(n-1)x(n))

      and the error e(n) after each iteration is recalculated as:

      (4.116) e(n) = d(n) - w^T(n-1)x(n)
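Putting the recursion together, here is a minimal RLS sketch. The signals and filter length are illustrative, and the initialization R^{-1}(0) = δI with a large δ is a common convention assumed here, not stated in the text:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, lam = 2000, 4, 0.999             # samples, filter order, forgetting factor
x = rng.standard_normal(N)
h = np.array([0.8, -0.4, 0.2, 0.1])    # unknown path (illustrative)
d = np.convolve(x, h)[:N]              # desired signal (noise-free for clarity)

w = np.zeros(M)
Pinv = 1e3 * np.eye(M)                 # R^{-1}(0) = delta*I, large delta (assumed)
xbuf = np.zeros(M)                     # tap vector [x(n), ..., x(n-M+1)]
for n in range(N):
    xbuf = np.concatenate(([x[n]], xbuf[:-1]))
    k = Pinv @ xbuf / (lam + xbuf @ Pinv @ xbuf)    # gain vector k(n)
    e = d[n] - w @ xbuf                # a priori error e(n)
    w = w + k * e                      # weight update
    Pinv = (Pinv - np.outer(k, xbuf @ Pinv)) / lam  # inversion-lemma update of R^{-1}
```

Compared with the LMS sketch earlier, RLS converges in far fewer samples, at the cost of O(M^2) operations per update instead of O(M).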