Pathology detection is not restricted to CXR images; it can also be performed on video data from lung sonography. A deep learning approach for detecting COVID-19–related pathologies from lung ultrasonography is developed in [51]. It builds on two observations: augmented Lung Ultrasound (LUS) images improve network performance in distinguishing healthy from ill patients [62], and keeping predictions consistent between perturbed and original images yields a more robust and generalized network [52, 55]. To this end, a Regularized Spatial Transformer Network (Reg-STN) is developed, in which a CNN and a spatial transformer network (STN) are jointly trained using the Adam optimizer. Lung sonography videos of 35 patients from various clinical centers in Italy were captured and then divided into 58,924 frames. The localization of COVID-19 pathologies is performed by the STN, which rests on the idea that the pathologies occupy only a small portion of the image, so the complete image need not be considered.
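For illustration, the following is a minimal PyTorch sketch of a spatial transformer module jointly trained with a small CNN classifier, in the spirit of the joint CNN–STN training described above. The layer sizes, the 224 × 224 single-channel input, and the class names (SpatialTransformer, LUSClassifier) are illustrative assumptions, not the Reg-STN architecture of [51].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Predicts an affine transform that lets the network attend to a small image region."""
    def __init__(self):
        super().__init__()
        self.localization = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(True),
            nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(True),
        )
        # 10 x 52 x 52 feature map for a 224 x 224 input (assumed size)
        self.fc_loc = nn.Sequential(
            nn.Linear(10 * 52 * 52, 32), nn.ReLU(True), nn.Linear(32, 6)
        )
        # Start from the identity transform so training is stable
        self.fc_loc[2].weight.data.zero_()
        self.fc_loc[2].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.fc_loc(self.localization(x).flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class LUSClassifier(nn.Module):
    """Toy CNN classifier that consumes the STN-attended frame."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.stn = SpatialTransformer()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(True), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.stn(x)                 # focus on the small pathological region
        return self.head(self.cnn(x).flatten(1))

model = LUSClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # joint CNN + STN training
```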
A three-layer Fusion High Resolution Network (FHRNet) applied for feature extraction, with a fusion CNN adopted for classifying pathologies in CXR, is presented in [26]. FHRNet helps in reducing noise and highlighting the lung region. Moreover, FHRNet has three branches: a local feature extraction module, a global feature extraction module, and a feature fusion module, wherein the local and global feature extraction networks find probabilities for one of the 14 classes. The input given to the local feature extractor is a small lung region obtained by applying a mask generated by the global feature extractor. Two HRNets are adjusted to obtain prominent features from the lung region and the whole image. The HRNet is connected to the global feature extraction layer through a feature fusion layer with a SoftMax classifier at the end, which classifies the input image into one of the 14 pathologies. Another deep CNN consisting of 121 layers is developed to detect five different chest pathologies: Consolidation, Mass, Pneumonia, Nodule, and Atelectasis [43]; cross entropy is utilized as the loss function, and the model achieved better AUC-ROC values for Pneumonia, Nodule, and Atelectasis than the model by Wang et al. [70].
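The global/local branch structure, with a mask-guided local input and a fused classifier over 14 classes, can be sketched as below. This is a simplified toy network for illustration only, not the FHRNet of [26]; the branch depths, the mask head, and the FusionCXRNet name are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionCXRNet(nn.Module):
    """Illustrative global/local feature-fusion CXR classifier (sizes are assumptions)."""
    def __init__(self, num_classes=14):
        super().__init__()
        # Global branch: sees the whole CXR and also predicts a coarse lung mask
        self.global_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(True),
        )
        self.mask_head = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Sigmoid())
        # Local branch: sees only the masked lung region
        self.local_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(64 * 2, num_classes)  # fusion of both branches

    def forward(self, x):
        g = self.global_branch(x)
        mask = F.interpolate(self.mask_head(g), size=x.shape[-2:])  # coarse lung mask
        l = self.local_branch(x * mask)                             # local features from masked region
        fused = torch.cat([self.pool(g).flatten(1), self.pool(l).flatten(1)], dim=1)
        return self.classifier(fused)                               # logits; softmax applied in the loss
```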
Recently, due to the cataclysmic outbreak of COVID-19, researchers have found that, as time passes, the lesions caused by the viral infection spread more and more. The biggest challenge in such situations, however, is that analysing each X-ray image and extracting important findings demands a lot of valuable time and the presence of medical specialists in the field. Software assistance is therefore necessary for medical practitioners to help identify COVID-19 cases from X-ray images. Researchers have consequently applied their expertise to design deep learning models on data shared worldwide to identify different perspectives of the spread of this virus in the chest. Many authors have created augmented data, owing to the unavailability of enough CXR images, and applied deep learning models for detecting pneumonia caused by the COVID virus as well as other virus-induced pneumonia.

The authors of [31] designed a new two-stage deep learning model to detect COVID-induced pneumonia. At the first stage, clinical CXRs are given as input to a ResNet50 deep network architecture for classification into virus-induced pneumonia, bacterial pneumonia, and normal cases. In addition, as COVID-19–induced pneumonia is viral, all identified cases of viral pneumonia are then differentiated with a ResNet101 deep network architecture in the second stage, thus classifying the input image into COVID-19–induced pneumonia or other viral pneumonia. This two-stage strategy is intended to provide a fast, rational, and consistent computer-aided solution.

A binary classifier model for classifying a CXR image into COVID and non-COVID categories, along with a multiclass model for COVID, non-COVID, and pneumonia classes, is proposed in [45]. The authors adopted the DarkNet model as a base and proposed an ensemble model known as DarkCovidNet with 17 convolutional layers and 5 Maxpool layers with filters of varying sizes such as 8, 16, and 32. Each convolutional layer is followed by BatchNorm and LeakyReLU operations, where LeakyReLU prevents neurons from dying. The Adam optimizer was used for weight updates with a cross-entropy loss function. The same model was used for binary as well as multiclass classification, and a binary-class accuracy of 98.08% and a multiclass accuracy of 87.02% are reported. Another CNN with a softmax classifier is implemented for classification of the ChexNet dataset into COVID-19, Normal, and Pneumonia classes and is compared with Inception Net v3, Xception Net, and ResNeXt models [32]. In order to handle irregularities in X-ray images, a DeTraC (Decompose-Transfer-Compose) model is proposed [1], which consists of three phases, namely, deep local feature extraction, training based on gradient descent optimization, and a class refinement layer for final classification into COVID-19 and normal classes. DeTraC achieved an accuracy of 98.23% using a VGG19 CNN pretrained on ImageNet.
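The two-stage strategy of [31] can be illustrated with off-the-shelf torchvision backbones, as in the minimal sketch below. The class-index ordering, the classifier head sizes, and the two_stage_predict helper are assumptions made for the sketch, not the authors' released code.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stage 1: normal vs. bacterial pneumonia vs. viral pneumonia (3-way head on ResNet50)
stage1 = models.resnet50(weights=None)
stage1.fc = nn.Linear(stage1.fc.in_features, 3)

# Stage 2: COVID-19 pneumonia vs. other viral pneumonia (2-way head on ResNet101)
stage2 = models.resnet101(weights=None)
stage2.fc = nn.Linear(stage2.fc.in_features, 2)

@torch.no_grad()
def two_stage_predict(cxr):
    """cxr: a (1, 3, 224, 224) tensor; label order is an assumed convention."""
    coarse = stage1(cxr).argmax(dim=1).item()      # 0 = normal, 1 = bacterial, 2 = viral
    if coarse != 2:
        return "normal" if coarse == 0 else "bacterial pneumonia"
    fine = stage2(cxr).argmax(dim=1).item()        # refine only the viral cases
    return "COVID-19 pneumonia" if fine == 0 else "other viral pneumonia"
```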
2.4 Comparison of Existing Models
In this section, existing models are compared on the following parameters, as detailed in Tables 2.2 to 2.5.
1 Model parameters: type of model used, input image size, number of layers, epochs, loss function used, and accuracy.
2 Accuracy achieved for all 14 different pathologies and dataset used for experimentation.
3 Other metrics: model used, specificity, sensitivity, F1-score, precision, and type of pathology detected.
4 Hardware and software used, and input image size.
Table 2.2 shows the datasets and pathologies detected by various DL models. It is observed that ChestX-ray14 is the most preferred dataset and can be used for training and testing of newly implemented models in the future. Cardiomegaly and Edema are easier to detect than other pathologies because of their spatially spread-out symptoms. In addition, Table 2.3 compares different models on the basis of AUC score for 14 different chest pathologies. It is observed that Cardiomegaly has the highest AUC score in most ensemble models. Edema, Emphysema, and Hernia are the pathologies following Cardiomegaly with better AUC values. CheXNeXt [59] is able to detect Mass, Pneumothorax, and Edema more accurately than other models.
Table 2.4 highlights the comparison between various models on the basis of other performance measures. Most of the models have used accuracy, F1-score, specificity, sensitivity, PPV, and NPV for comparison with other models. Networks such as AlexNet, VGG16, VGG19, DenseNet, and ResNet121 trained from scratch achieved better accuracy than those whose parameters are initialized from ImageNet, because ImageNet features are altogether different from those of CXR images.
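In practice, the difference between training from scratch and initializing from ImageNet amounts to a single configuration choice in frameworks such as torchvision. The snippet below is an illustrative DenseNet-121 example with a 14-class head matching the ChestX-ray14 label set; it is a sketch of the two initialization options, not any particular author's training setup.

```python
import torch.nn as nn
from torchvision import models

# Option 1: train from scratch (random initialization)
scratch_model = models.densenet121(weights=None)
scratch_model.classifier = nn.Linear(scratch_model.classifier.in_features, 14)

# Option 2: initialize from ImageNet weights, then fine-tune on CXR data
pretrained_model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
pretrained_model.classifier = nn.Linear(pretrained_model.classifier.in_features, 14)
```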
Table 2.5 shows the hardware used by different models, along with the size of the input image in pixels and the datasets used. Because the task is computationally intensive, high-end NVIDIA GPU hardware with large RAM has been used.
Table 2.2 Comparison of different deep learning models.
Ref. | Model used | Dataset | No. layers | Epoch | Activation function | Iterations | Pathology detected |
---|---|---|---|---|---|---|---|
[23] | DenseNet-121 | ChestX-ray14 | 121 | - | Softmax | 50,000 | 14 chest pathologies |
[67] | Pretrained CNNs: | ChestX-ray14 | | | | | |