is zero for Gaussian distributed signals. Often the signals are considered ergodic, hence the statistical (ensemble) averages can be assumed identical to, and therefore estimated by, the corresponding time averages.
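For a quick numerical check of these points, the short Python sketch below estimates the excess kurtosis of a synthetic Gaussian sequence by a simple time average (as the ergodicity assumption permits); it assumes the quantity referred to above is the kurtosis of Eqs. (4.1)–(4.3), and the signal length and random seed are arbitrary illustrative choices, not values from the text.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)      # synthetic zero-mean Gaussian "signal"

# Excess kurtosis: the fourth-order cumulant normalized by the squared variance,
# estimated here by a time average over the record (ergodicity assumption).
k = kurtosis(x, fisher=True, bias=False)
print(f"estimated excess kurtosis: {k:.3f}")   # close to zero for Gaussian data
```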
The negentropy of a signal x(n) [11] is defined as:
$$J(x(n)) = H(x_{\mathrm{Gauss}}(n)) - H(x(n)) \qquad (4.4)$$
where x_Gauss(n) is a Gaussian random signal with the same covariance as x(n) and H(·) is the differential entropy [12] defined as:

$$H(x(n)) = -\int p(x(n)) \log p(x(n)) \, \mathrm{d}x(n) \qquad (4.5)$$
and p(x(n)) is the signal distribution. Negentropy is always nonnegative.
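As a minimal sketch of Eqs. (4.4) and (4.5), the code below approximates the differential entropy from a histogram estimate of p(x(n)) and forms the negentropy against a Gaussian reference of matched variance using the closed form ½ ln(2πeσ²); the bin count, test signals, and function names are illustrative assumptions rather than anything prescribed by the text.

```python
import numpy as np

def differential_entropy(x, bins=64):
    """Histogram approximation of H(x) = -int p(x) log p(x) dx (cf. Eq. (4.5))."""
    p, edges = np.histogram(x, bins=bins, density=True)
    widths = np.diff(edges)
    nz = p > 0                                   # empty bins contribute nothing
    return -np.sum(p[nz] * np.log(p[nz]) * widths[nz])

def negentropy(x, bins=64):
    """J(x) = H(x_Gauss) - H(x) (cf. Eq. (4.4)), with x_Gauss matched in variance."""
    h_gauss = 0.5 * np.log(2 * np.pi * np.e * np.var(x))   # entropy of a Gaussian
    return h_gauss - differential_entropy(x, bins)

rng = np.random.default_rng(1)
gaussian = rng.standard_normal(20_000)
spiky = rng.laplace(size=20_000)        # super-Gaussian, spike-like amplitudes
print(negentropy(gaussian))             # near zero (up to estimation error)
print(negentropy(spiky))                # clearly positive for non-Gaussian data
```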
Entropy, by itself, is an important measure of EEG behaviour, particularly in cases where brain synchronization changes, such as when the brain waves become gradually more synchronized as the brain approaches seizure onset. It is also a valuable indicator of other neurological disorders as well as psychiatric diseases.
By replacing the probability density function (pdf) in Eq. (4.5) with joint or conditional pdfs, joint or conditional entropy is defined, respectively. In addition, there are newer definitions of entropy catering for neurological applications, such as the multiscale fluctuation‐based dispersion entropy defined in [13], which is briefly explained in this chapter, and those in the references therein.
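As a small numerical illustration of the joint and conditional forms, the sketch below estimates them for two synthetic channels from a two-dimensional amplitude histogram; the discretization (16 bins per axis) and the use of discrete Shannon entropies in bits, rather than the exact differential quantities, are simplifications assumed here for brevity.

```python
import numpy as np

def joint_and_conditional_entropy(x, y, bins=16):
    """Discrete joint entropy H(X, Y) and conditional entropy H(X | Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                             # joint pmf estimate
    py = pxy.sum(axis=0)                         # marginal pmf of y
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log2(pxy[nz]))   # joint entropy
    h_y = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    return h_xy, h_xy - h_y                      # H(X, Y) and H(X | Y) = H(X, Y) - H(Y)

rng = np.random.default_rng(2)
x = rng.standard_normal(50_000)
y = 0.8 * x + 0.2 * rng.standard_normal(50_000)  # strongly coupled second channel
print(joint_and_conditional_entropy(x, y))       # H(X | Y) shrinks as coupling grows
```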
The KL distance between two distributions p_1 and p_2 is defined as:

$$D_{KL}(p_1, p_2) = \int p_1(z) \log \frac{p_1(z)}{p_2(z)} \, \mathrm{d}z \qquad (4.6)$$
It is clear that the KL distance is generally asymmetric: interchanging p_1 and p_2 in Eq. (4.6) changes its value. The minimum of the KL distance, zero, occurs when p_1(z) = p_2(z).
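The asymmetry is easy to verify numerically; the sketch below evaluates Eq. (4.6) on a fine grid in both directions for two arbitrarily chosen Gaussian densities (the distributions and grid are illustrative, not taken from the text).

```python
import numpy as np
from scipy.stats import norm

def kl_distance(p1, p2, dz):
    """Numerical evaluation of Eq. (4.6): int p1(z) log(p1(z) / p2(z)) dz."""
    mask = p1 > 0
    return np.sum(p1[mask] * np.log(p1[mask] / p2[mask]) * dz)

z = np.linspace(-10.0, 10.0, 2001)
dz = z[1] - z[0]
p1 = norm.pdf(z, loc=0.0, scale=1.0)
p2 = norm.pdf(z, loc=1.0, scale=2.0)

print(kl_distance(p1, p2, dz))   # D(p1, p2)
print(kl_distance(p2, p1, dz))   # D(p2, p1): a different value, showing the asymmetry
```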
4.4 Signal Segmentation
Often it is necessary to divide the EEG signals into segments of similar characteristics that are particularly meaningful to clinicians and useful for assessment by neurophysiologists. Within each segment the signals are considered statistically stationary, usually with similar time and frequency statistics. As an example, an EEG recorded from an epileptic patient may be divided into preictal, ictal, and postictal segments, each of which may have a different duration. Figure 4.1 shows an EEG sequence including all of the above segments.
Figure 4.1 An EEG set of tonic–clonic seizure signals including three segments of preictal, ictal, and postictal behaviour.
In segmentation of EEGs, the time or frequency properties of the signals may be exploited. This eventually leads to a dissimilarity measure d(m) between adjacent EEG frames, where m is an integer index of the frame and the dissimilarity is calculated between the mth and (m − 1)th (consecutive) signal frames. The boundary between two different segments is then placed between the mth and (m − 1)th frames provided d(m) > η_T, where η_T is an empirically set threshold level. An efficient segmentation is possible by highlighting and effectively exploiting the diagnostic information within the signals with the help of expert clinicians. However, access to such experts is not always possible and therefore there is a need for algorithmic methods.
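The overall procedure can be sketched as follows: the signal is cut into length-N frames, a dissimilarity d(m) is computed between consecutive frames, and a boundary is declared wherever d(m) exceeds the threshold. The frame length, threshold, and the crude variance-based dissimilarity used in the example are placeholders, not one of the specific criteria introduced below.

```python
import numpy as np

def segment_boundaries(x, frame_len, dissimilarity, eta_t):
    """Return sample indices where d(m) between frames m-1 and m exceeds eta_t."""
    n_frames = len(x) // frame_len
    frames = [x[m * frame_len:(m + 1) * frame_len] for m in range(n_frames)]
    boundaries = []
    for m in range(1, n_frames):
        if dissimilarity(frames[m], frames[m - 1]) > eta_t:
            boundaries.append(m * frame_len)     # boundary between frames m-1 and m
    return boundaries

# Synthetic two-state record: low-amplitude activity followed by a high-amplitude burst.
rng = np.random.default_rng(3)
eeg = np.concatenate([rng.standard_normal(2000),
                      5.0 * rng.standard_normal(2000)])
d_var = lambda a, b: abs(np.var(a) - np.var(b))  # placeholder dissimilarity measure
print(segment_boundaries(eeg, frame_len=250, dissimilarity=d_var, eta_t=10.0))
```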
A number of different dissimilarity measures may be defined based on the fundamentals of signal processing. One criterion is based on the autocorrelations for segment m defined as:
$$r_x(k, m) = E[x(n)\, x(n + k)] \qquad (4.7)$$
The autocorrelation function of the mth length-N frame, over an assumed time interval n, n + 1, …, n + (N − 1), can be approximated as:
$$\hat{r}_x(k, m) = \frac{1}{N} \sum_{l=n}^{n+N-1-k} x(l)\, x(l + k) \qquad (4.8)$$
Then the criterion is set to:
(4.9)
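Since the exact expression of Eq. (4.9) is not reproduced above, the sketch below uses one plausible form for illustration: the sample autocorrelations of Eq. (4.8) are estimated for two consecutive frames and compared through a normalized squared difference. The lag range and the normalization by the zero-lag values are assumptions made here, not necessarily the definition of d_1(m) used in the text.

```python
import numpy as np

def frame_acf(frame, max_lag):
    """Biased sample autocorrelation of one length-N frame (cf. Eq. (4.8))."""
    n = len(frame)
    return np.array([np.dot(frame[:n - k], frame[k:]) / n for k in range(max_lag + 1)])

def d1(frame_m, frame_prev, max_lag=20):
    """Autocorrelation-based dissimilarity between consecutive frames (assumed form)."""
    r_m = frame_acf(frame_m, max_lag)
    r_p = frame_acf(frame_prev, max_lag)
    return np.sum((r_m - r_p) ** 2) / (r_m[0] * r_p[0])

rng = np.random.default_rng(4)
t = np.arange(256)
broadband = rng.standard_normal(256)                                       # noise-like frame
rhythmic = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.standard_normal(256)   # oscillatory frame
print(d1(broadband, broadband))   # zero for identical frames
print(d1(rhythmic, broadband))    # large when the correlation structure changes
```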
A second criterion can be based on higher‐order statistics. Signals with more uniform distributions, such as normal brain rhythms, have low kurtosis, whereas seizure signals or ERP signals often have high kurtosis values. Kurtosis is defined as the fourth‐order cumulant at zero time lags and is related to the second‐ and fourth‐order moments as given in Eqs. (4.1)–(4.3). A second-level discriminant d_2(m) is then defined as:
(4.10)
where m refers to the mth frame of the EEG signal x(n).
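A minimal sketch of such a kurtosis-based discriminant follows; because the exact expression of Eq. (4.10) is not reproduced above, the absolute difference between the kurtosis values of consecutive frames is assumed here purely for illustration.

```python
import numpy as np
from scipy.stats import kurtosis

def d2(frame_m, frame_prev):
    """Kurtosis-based discriminant between consecutive frames (assumed form of Eq. (4.10))."""
    return abs(kurtosis(frame_m, fisher=True) - kurtosis(frame_prev, fisher=True))

rng = np.random.default_rng(5)
background = rng.standard_normal(512)   # near-Gaussian rhythm: excess kurtosis about 0
spiky = rng.laplace(size=512)           # heavy-tailed, transient-like frame: high kurtosis
print(d2(background, background))       # zero for identical frames
print(d2(spiky, background))            # clearly larger across the change in statistics
```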
A third criterion is defined from the spectral error measure of the periodogram. A periodogram of the mth frame is obtained by Fourier transforming the correlation function of the EEG signal:

$$S_x(\omega, m) = \sum_{k=-(N-1)}^{N-1} \hat{r}_x(k, m)\, e^{-j\omega k} \qquad (4.11)$$
where
(4.12)
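A sketch of this spectral criterion is given below; the periodogram of each frame is computed as |FFT|²/N, which by the Wiener–Khinchin relation equals the Fourier transform of the biased sample autocorrelation as in Eq. (4.11), and a normalized squared spectral difference is assumed for the error measure, since its exact expression is not reproduced above.

```python
import numpy as np

def frame_periodogram(frame):
    """Periodogram of one frame: |FFT|^2 / N, i.e. the Fourier transform of the
    biased sample autocorrelation of the frame (cf. Eq. (4.11))."""
    n = len(frame)
    return np.abs(np.fft.rfft(frame)) ** 2 / n

def d3(frame_m, frame_prev):
    """Spectral error between consecutive frames (assumed squared-error form)."""
    s_m = frame_periodogram(frame_m)
    s_p = frame_periodogram(frame_prev)
    return np.sum((s_m - s_p) ** 2) / np.sum(s_p ** 2)

rng = np.random.default_rng(6)
t = np.arange(512)
alpha_like = np.sin(2 * np.pi * 10 * t / 256) + 0.2 * rng.standard_normal(512)
beta_like = np.sin(2 * np.pi * 25 * t / 256) + 0.2 * rng.standard_normal(512)
print(d3(alpha_like, alpha_like))    # zero for identical frames
print(d3(beta_like, alpha_like))     # large when the spectral content shifts
```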
The test window sample autocorrelation for the measurement of both d_1(m) and d_3(m) can be updated through the following recursive equation (Eq. (4.13)) over test windows of size N:
and thereby the computational complexity can be reduced in practice. A fourth criterion corresponds to the error energy in autoregressive (AR)-based modelling of the signals. The prediction error in the AR model of the mth frame is simply defined as:

$$e(n, m) = x(n) - \sum_{k=1}^{p} a_k(m)\, x(n - k) \qquad (4.14)$$
where p is the prediction order and a_k(m), k = 1, 2, …, p, are the prediction coefficients. For a given p the coefficients can be found directly (for example by using Durbin's method) in such a way as to minimize the energy of the error (residual) between the actual and the predicted signal. In this approach it is assumed that the frames of length N