EEG Signal Processing and Machine Learning. Saeid Sanei

book we will also see that recent work in brain–computer interfacing (BCI) [1] has focused on the development of advanced signal processing as well as on the vast advances in cooperative networking and deep learning tools and algorithms.

      Modelling of neural activities is probably more difficult than modelling the function of any other organ. However, some simple models for generating EEG signals have been proposed. Some of these models have also been extended to include generation of abnormal EEG signals.

      Localization of brain signal sources is another very important field of research [2]. In order to provide a reliable algorithm for localization of the sources within the brain, sufficient knowledge is required about both the propagation of electromagnetic waves and how the information in the measured signals can be exploited to separate and localize the sources. The sources might be considered as magnetic dipoles, for which the well‐known inverse problem has to be solved, or as distributed current sources.
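      As a rough, self‐contained illustration of the distributed‐source view (this is not the book's algorithm; the lead‐field matrix, dimensions, and regularization value below are hypothetical), the following Python sketch computes a Tikhonov‐regularized minimum‐norm estimate of source amplitudes from one vector of electrode measurements. Brain source localization is treated properly in Chapter 10.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 32 scalp electrodes, 500 candidate source locations.
# In practice the lead-field (forward) matrix comes from a volume-conductor
# model of the head; here it is random purely for illustration.
n_channels, n_sources = 32, 500
L = rng.standard_normal((n_channels, n_sources))

# Simulate a measurement generated by two active sources plus sensor noise.
s_true = np.zeros(n_sources)
s_true[[40, 310]] = [1.0, -0.7]
x = L @ s_true + 0.05 * rng.standard_normal(n_channels)

# Tikhonov-regularized minimum-norm estimate:
#   s_hat = L^T (L L^T + lambda * I)^(-1) x
lam = 1e-2
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_channels), x)

print("strongest estimated sources:", np.argsort(np.abs(s_hat))[-5:])
```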

      Epilepsy monitoring, detection, and prediction have also attracted many researchers. Dynamical analysis of time series, together with application of blind separation of the signal sources, has enabled prediction of focal epilepsies from scalp EEGs. Similarly, application of time–frequency (TF) domain analysis for detection of seizures in neonates has paved the way for further research in this area. An important recent contribution in the field of epilepsy is how to model the pathways between intracranial epileptiform discharges (mainly during the interictal period) and the scalp electrodes. Recent developments in this area have been made by introducing new methods such as the multiview approach and dictionary learning [4–6], as well as deep neural network structures [7, 8], which enable capturing a considerable percentage of so‐called interictal epileptiform discharges (IEDs) from over the scalp. These techniques are described in Chapters 7 and 11 of this book.

      In the following sections most of the tools and algorithms for the above objectives are explained and their mathematical foundations discussed. The application of these algorithms to the analysis of normal and abnormal EEGs, however, follows in the later chapters of this book. The reader should also be aware of the required concepts and definitions borrowed from linear algebra, further details of which can be found in [9]. Throughout this chapter and the remainder of this book, continuous time is denoted by ‘t’ and discrete time, with normalized sampling period T = 1, by ‘n’. In some illustrations, however, the actual timings in seconds are presented.

      The head, as a mixing medium, combines the EEG signals generated locally within the brain into the mixtures observed at the sensor positions. As a system, the head may be more or less susceptible to such sources in different situations. Generally, an EEG signal can be considered the output of a nonlinear system, which may be characterized deterministically. This concept marks a fundamental difference between EEG and magnetoencephalography (MEG) signals. Unlike in EEG, for MEG the resistivity (or conductivity) of the head has significantly less effect on the magnetic field, although a slight nonlinearity in MEG stems from minor differences in head tissue permittivity values.

      Electromagnetic source characterization and localization requires accurate volume conductor models representing head geometry and the electrical conductivity field. It has been shown that, despite extensive research, the measurements of head tissue parameters are inconsistent and vary in different experiments [10]. There is more discussion on this concept in Chapter 10 of this book where we introduce brain source localization.

      The changes in brain metabolism as a result of biological and physiological phenomena in the human body can change the process of combining the EEG source signals. Some of these changes are influenced by the activity of the brain itself. These effects make the system nonlinear. Analysis of such a system is very complicated and up to now nobody has fully modelled the system to aid in the analysis of brain signals.

      Nonstationarity of the signals can be quantified by evaluating the changes in signal distribution over time. In a strict‐sense stationary process the signal distribution remains the same over different time intervals. However, such strict stationarity is often not required; it is sufficient, as in wide‐sense stationarity, for statistics such as the mean and variance to remain fixed (or change only insignificantly) over time.

      Although in many applications the multichannel EEG distribution is considered multivariate Gaussian, the mean and covariance properties generally change from segment to segment. As such, EEGs are considered stationary only within short intervals and are generally quasi‐stationary. Although this interval can change due to rapid changes in brain state, such as going from closed to open eyes, from sleep to wakefulness, or from normal activity to seizure, or with changes in alertness, the brain responding to stimulation in the form of event‐related potential (ERP) and evoked potential (EP) signals, eye blinking, and changes in emotion, in practice a 10 second window of EEG is considered stationary.
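      As a minimal sketch of checking quasi‐stationarity (the sampling rate, window length, and toy signal below are assumptions, not taken from the book), one can track the mean and variance of consecutive 10 second windows of a single channel and flag segments in which they drift appreciably.

```python
import numpy as np

def segment_stats(x, fs, win_sec=10.0):
    """Mean and variance of consecutive, non-overlapping windows of one channel."""
    win = int(win_sec * fs)
    n_seg = len(x) // win
    segs = x[:n_seg * win].reshape(n_seg, win)
    return segs.mean(axis=1), segs.var(axis=1)

# Hypothetical single-channel recording: 60 s at 256 Hz whose amplitude
# increases halfway through, mimicking a change of brain state.
fs = 256
t = np.arange(60 * fs) / fs
x = (1.0 + 0.5 * (t > 30)) * np.random.default_rng(1).standard_normal(t.size)

means, variances = segment_stats(x, fs)
print(np.round(variances, 2))  # variance jumps after t = 30 s, flagging nonstationarity
```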

      The change in the distribution of the signal segments can be measured in terms of both the parameters of a Gaussian process and the deviation of the distribution from Gaussian. The non‐Gaussianity of the signals can be checked by measuring or estimating higher‐order statistics and related measures such as skewness, kurtosis, negentropy, and the Kullback–Leibler (KL) distance (also known as KL divergence).
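      Since negentropy and the KL distance are only named here, the sketch below uses two common stand‐ins that are assumptions of this example rather than the book's definitions: a log‐cosh approximation of negentropy in the spirit of Hyvärinen's ICA contrast functions, and the KL divergence between a histogram of the data and a Gaussian fitted to it.

```python
import numpy as np
from scipy.stats import norm, entropy

rng = np.random.default_rng(2)

def negentropy_logcosh(x, n_ref=100_000):
    """Approximate negentropy as (E[G(y)] - E[G(nu)])^2 with G(u) = log cosh(u),
    y the standardized data and nu a standard normal reference sample."""
    y = (x - x.mean()) / x.std()
    ref = rng.standard_normal(n_ref)
    return (np.log(np.cosh(y)).mean() - np.log(np.cosh(ref)).mean()) ** 2

def kl_to_gaussian(x, bins=50):
    """KL divergence between a histogram of x and a Gaussian fitted to x."""
    hist, edges = np.histogram(x, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p = hist * np.diff(edges)                               # empirical bin probabilities
    q = norm.pdf(centres, x.mean(), x.std()) * np.diff(edges)
    return entropy(p + 1e-12, q + 1e-12)                    # scipy's relative entropy

gaussian_seg = rng.standard_normal(2560)                    # ~10 s at 256 Hz (assumed)
spiky_seg = gaussian_seg + 5.0 * (rng.random(2560) < 0.01)  # occasional spike artefacts
for name, seg in (("gaussian", gaussian_seg), ("spiky", spiky_seg)):
    print(name, negentropy_logcosh(seg), kl_to_gaussian(seg))
```

      Both measures are near zero for the Gaussian segment and grow for the spike‐contaminated one, which is the sense in which they quantify deviation from Gaussianity.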

      Skewness is a measure of asymmetry, or more precisely, the lack of symmetry in a distribution. A distribution is symmetric if it looks the same to the left and right of its midline or mean point. The skewness is defined for a real signal as

      (4.1) \[ \operatorname{skew}(x) = \frac{E\left[\left(x(n)-\mu\right)^{3}\right]}{\sigma^{3}} \]

      where μ and σ are, respectively, the mean and the standard deviation, and E[·] denotes statistical expectation. If the bulk of the distribution lies to the right of the mean (i.e. the tail extends to the left) the skewness is negative, and vice versa. For a symmetric distribution, such as the Gaussian, the skewness is zero.
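      As a quick numerical check of Eq. (4.1) (the toy data and the comparison against SciPy are illustrative choices, not the book's), a sample estimate of the skewness can be computed directly from the definition:

```python
import numpy as np
from scipy.stats import skew

def sample_skewness(x):
    """Sample version of Eq. (4.1): E[(x(n) - mu)^3] / sigma^3."""
    mu, sigma = x.mean(), x.std()
    return np.mean((x - mu) ** 3) / sigma ** 3

x = np.random.default_rng(3).exponential(size=10_000)  # right-skewed toy data
print(sample_skewness(x), skew(x))  # both close to 2, the skewness of an exponential
```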

      Kurtosis is a measure of how peaked or flat a distribution is relative to a normal distribution. That is, datasets with high kurtosis tend to have a distinct peak near the mean, decline rather rapidly, and have heavy tails. Datasets with low kurtosis tend to have a flat top near the mean rather than a sharp peak; a uniform distribution is the extreme case. The kurtosis for a signal x(n) is defined as:

      (4.2) \[ \operatorname{kurt}(x) = \frac{E\left[\left(x(n)-\mu\right)^{4}\right]}{\sigma^{4}} \]

      which
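      In the same spirit as the skewness sketch above (and assuming the non‐excess form of Eq. (4.2) given here), a sample estimate of the kurtosis can be computed as follows; a Gaussian signal gives a value near 3, a flat (uniform) one a smaller value, and a heavy‐tailed one a larger value.

```python
import numpy as np
from scipy.stats import kurtosis

def sample_kurtosis(x):
    """Sample version of Eq. (4.2): E[(x(n) - mu)^4] / sigma^4 (non-excess form)."""
    mu, sigma = x.mean(), x.std()
    return np.mean((x - mu) ** 4) / sigma ** 4

rng = np.random.default_rng(4)
print(sample_kurtosis(rng.standard_normal(10_000)))   # ~3   (Gaussian)
print(sample_kurtosis(rng.uniform(-1, 1, 10_000)))    # ~1.8 (flat, uniform)
print(sample_kurtosis(rng.laplace(size=10_000)))      # ~6   (heavy-tailed)
# scipy.stats.kurtosis(..., fisher=False) uses the same non-excess convention.
```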

