      A deep CNN, namely, ChestNet, is proposed for the detection of consolidation, a kind of lung opacity, in pediatric CXR images [5]. Consolidation is one of the critical abnormalities whose detection helps in the early prediction of pneumonia. Before the model is applied, a three-step pre-processing stage addresses the following issues: checking for confounding variables in the image, searching for consolidation patterns rather than relying on histogram features, and learning to detect sharp edges such as ribs and spines instead of having the CNN detect the consolidation pattern directly. ChestNet consists of convolutional layers, a batch normalization layer after each convolutional layer, and two classifier layers at the end. Only two max-pooling layers are used, in contrast to the five pooling layers of VGG16 and DenseNet121, in order to preserve the region of the image over which the consolidation pattern is spread out. Smaller convolutional kernels such as 3 × 3 learn undesirable features, so the authors use 7 × 7 kernels to capture the widely spread consolidation pattern.
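
      The description above translates into a fairly small network. The following is a minimal PyTorch sketch of a ChestNet-style model under the stated constraints, i.e., batch normalization after every convolution, only two max-pooling stages, 7 × 7 kernels, and two classifier layers; the class name ChestNetSketch, the channel widths, and the layer count are illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class ChestNetSketch(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()

        def block(c_in, c_out, pool=False):
            # 7x7 kernels to follow the widely spread consolidation pattern
            layers = [nn.Conv2d(c_in, c_out, kernel_size=7, padding=3),
                      nn.BatchNorm2d(c_out),
                      nn.ReLU(inplace=True)]
            if pool:
                layers.append(nn.MaxPool2d(2))  # only two pooling stages overall
            return layers

        self.features = nn.Sequential(
            *block(1, 32, pool=True),    # grayscale CXR input
            *block(32, 64),
            *block(64, 64, pool=True),
            *block(64, 128),
        )
        # two classifier layers at the end, as described above
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a single 224x224 grayscale chest radiograph
logits = ChestNetSketch()(torch.randn(1, 1, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```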

      A multiple-feature-extraction technique was used in [23] for the classification of thoracic pathologies. Various classifiers, such as Gaussian discriminant analysis (GDA), KNN, Naïve Bayes, SVM, Adaptive Boosting (AdaBoost), Random Forest, and ELM, were compared with a pretrained DenseNet121, which was also used for localization by generating Class Activation Maps (CAM). The outputs of different shallow and deep feature extraction algorithms, such as Scale Invariant Feature Transform (SIFT), GIST, Local Binary Pattern (LBP), and Histogram of Oriented Gradients (HOG), were integrated with the different classifiers for the final classification of various lung abnormalities. It is observed that ELM achieves a better F1-score than DenseNet121.
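
      A minimal sketch of the shallow-feature pipeline is given below, assuming HOG and LBP descriptors from scikit-image fed to a conventional classifier; an SVM stands in for ELM and the other classifiers compared in [23], and SIFT and GIST are omitted for brevity. The helper shallow_features is hypothetical, not code from the paper.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def shallow_features(image):
    """Concatenate HOG and a uniform-LBP histogram for one grayscale CXR."""
    img_u8 = (image * 255).astype(np.uint8)          # LBP expects integer images
    hog_vec = hog(img_u8, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])

# Toy data: random 128x128 "radiographs" with binary labels
rng = np.random.default_rng(0)
images = rng.random((20, 128, 128))
labels = rng.integers(0, 2, size=20)

X = np.stack([shallow_features(img) for img in images])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
print(clf.score(X, labels))
```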

      Two asymmetric networks, ResNet and DenseNet, which extract complementary features from the input image, were used to design a new ensemble model known as DualCheXNet [10]. It was the first attempt to exploit the complementarity of dual asymmetric subnetworks in the field of thoracic disease classification. The two networks work simultaneously in a Feature Level Fusion (FLF) module, and selected features from both networks are combined in a Decision Level Fusion (DLF) module, on which two auxiliary classifiers are applied to classify the image into one of the pathologies.
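
      The following PyTorch sketch illustrates the dual asymmetric backbone idea, assuming that feature-level fusion is a concatenation of pooled ResNet50 and DenseNet121 features and that decision-level fusion is a simple average of the branch and fused logits; DualCheXNet's actual fusion modules and training losses are more elaborate, so the class DualBackboneSketch is only an approximation.

```python
import torch
import torch.nn as nn
from torchvision import models

class DualBackboneSketch(nn.Module):
    def __init__(self, num_classes=14):
        super().__init__()
        resnet = models.resnet50(weights=None)
        # keep everything up to (and including) the global average pool
        self.resnet_features = nn.Sequential(*list(resnet.children())[:-1])   # -> (B, 2048, 1, 1)
        self.densenet_features = models.densenet121(weights=None).features    # -> (B, 1024, 7, 7)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.cls_resnet = nn.Linear(2048, num_classes)        # auxiliary classifier 1
        self.cls_densenet = nn.Linear(1024, num_classes)      # auxiliary classifier 2
        self.cls_fused = nn.Linear(2048 + 1024, num_classes)  # feature-level fusion head

    def forward(self, x):
        r = torch.flatten(self.resnet_features(x), 1)
        d = torch.flatten(self.pool(self.densenet_features(x)), 1)
        fused = torch.cat([r, d], dim=1)                      # feature-level fusion
        # decision-level fusion: average the three logit vectors
        return (self.cls_resnet(r) + self.cls_densenet(d) + self.cls_fused(fused)) / 3

logits = DualBackboneSketch()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 14])
```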

      The problem of poor alignment and noise in non-lesion areas of CXR images, which hinders network performance, is overcome by a three-branch attention-guided CNN (AGCNN) discussed in [20], which helps to identify thorax diseases. AGCNN, with ResNet50 as its backbone, works in the same manner as a radiologist, who first browses the complete image and then gradually narrows the focus to a small lesion-specific region. AGCNN therefore concentrates on small, disease-specific local regions, as in the case of Nodule. It has three branches: a local branch, a global branch, and a fusion branch. If the lesion region is distributed throughout the image, pathologies missed by the local branch due to loss of information, as in the case of Pneumonia, are captured by the global branch. The global and local branches are then fused to fine-tune the CNN before drawing the final conclusion. AGCNN is trained in different orders: G_LF (global branch first, then local and fusion together), GL_F (global and local together, followed by fusion), GLF (all together), and G_L_F (global, local, and fusion separately, one after another).
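
      A simplified sketch of the three-branch idea is shown below, assuming the attention map is obtained by taking the channel-wise maximum of the global branch's feature maps and cropping the above-threshold region for the local branch; ResNet18 backbones are used here only to keep the example small, whereas the paper uses ResNet50, and the class AGCNNSketch is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def backbone():
    net = models.resnet18(weights=None)
    return nn.Sequential(*list(net.children())[:-2])  # keep spatial feature maps

class AGCNNSketch(nn.Module):
    def __init__(self, num_classes=14):
        super().__init__()
        self.global_branch = backbone()
        self.local_branch = backbone()
        self.fc_fusion = nn.Linear(512 * 2, num_classes)

    def attention_crop(self, x, feat, threshold=0.7):
        # Channel-wise max -> coarse attention map; crop the bounding box of
        # above-threshold activations and resize it back to the input size.
        heat = feat.max(dim=1, keepdim=True).values
        heat = F.interpolate(heat, size=x.shape[-2:], mode="bilinear", align_corners=False)
        crops = []
        for i in range(x.size(0)):
            h = heat[i, 0]
            mask = h >= h.min() + threshold * (h.max() - h.min())
            ys, xs = torch.nonzero(mask, as_tuple=True)
            y0, y1 = int(ys.min()), int(ys.max()) + 1
            x0, x1 = int(xs.min()), int(xs.max()) + 1
            crop = x[i:i + 1, :, y0:y1, x0:x1]
            crops.append(F.interpolate(crop, size=x.shape[-2:], mode="bilinear",
                                       align_corners=False))
        return torch.cat(crops, dim=0)

    def forward(self, x):
        g_feat = self.global_branch(x)                      # global branch
        l_feat = self.local_branch(self.attention_crop(x, g_feat))  # local branch
        g_vec = F.adaptive_avg_pool2d(g_feat, 1).flatten(1)
        l_vec = F.adaptive_avg_pool2d(l_feat, 1).flatten(1)
        return self.fc_fusion(torch.cat([g_vec, l_vec], dim=1))     # fusion branch

print(AGCNNSketch()(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 14])
```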

      A multiple instance learning (MIL) approach that assures good localization and multi-classification performance even when only a small number of annotated images is available is discussed in [37]. The latest version of the residual network, pre-act-ResNet [22], is employed to correctly locate the site of disease. Initially, the model learns the class and location information of all images. Then, each annotated input image is divided into four patches and the model is trained on each patch. For an image with a bounding-box annotation, the learning task becomes fully supervised, since the disease label for each patch can be derived from the overlap between the patch and the bounding box. The task is formulated as a multiple instance learning problem in which at least one patch in a positive image belongs to the disease, and all patches must be disease-free if the image contains no illness.
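
      The MIL formulation can be made concrete with a small sketch, assuming a noisy-OR pooling rule (the image is positive if at least one patch is) and patch labels derived from the overlap between each patch and the bounding box; the helpers image_probability and patch_labels_from_bbox below are illustrative, not taken from [37].

```python
import torch

def image_probability(patch_probs):
    """Noisy-OR MIL pooling: p_image = 1 - prod(1 - p_patch)."""
    return 1.0 - torch.prod(1.0 - patch_probs, dim=-1)

def patch_labels_from_bbox(grid=2, bbox=(0.55, 0.6, 0.9, 0.95), min_overlap=0.0):
    """Mark each cell of a grid x grid patch layout as positive if it overlaps
    the normalized bounding box (x0, y0, x1, y1) by more than min_overlap."""
    x0, y0, x1, y1 = bbox
    labels = torch.zeros(grid, grid)
    for r in range(grid):
        for c in range(grid):
            px0, py0 = c / grid, r / grid
            px1, py1 = (c + 1) / grid, (r + 1) / grid
            inter = max(0.0, min(px1, x1) - max(px0, x0)) * \
                    max(0.0, min(py1, y1) - max(py0, y0))
            if inter > min_overlap:
                labels[r, c] = 1.0
    return labels

patch_probs = torch.tensor([0.05, 0.10, 0.90, 0.02])  # scores for four patches
print(image_probability(patch_probs))                 # close to 0.9 -> positive image
print(patch_labels_from_bbox())                       # 2x2 patch label map from a bbox
```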

      Considering the orientation, rotation, and tilting problems of images, a hybrid deep learning framework, VDSNet, which combines VGG, data augmentation, and a spatial transformer network (STN) with a CNN, is presented in [7] for the detection of lung diseases such as asthma, TB, and pneumonia from the NIH CXR dataset. A comparison with CapsNet, vanilla RGB, vanilla gray, and a hybrid CNN-VGG shows that VDSNet achieves a better accuracy of 73% than the other models, but is time consuming. In [67], predefined deep CNNs, namely, AlexNet, VGG16, ResNet18, Inception-v3, and DenseNet121, with weights either initialized from the ImageNet dataset or initialized with random values from scratch, are adopted for the classification of chest radiographs into normal and abnormal classes. Pretrained ImageNet weights performed better than random initialization from scratch. Deeper CNNs work better for detection or segmentation tasks rather than binary classification, and pretrained ResNet outperformed training from scratch on a moderately sized dataset (e.g., 8,500 images rather than 18,000).
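
      The comparison of ImageNet initialization against training from scratch in [67] corresponds to a standard transfer-learning recipe; the sketch below shows it for ResNet18 using torchvision's weights API, with the final layer replaced for the binary normal/abnormal task. The helper build_resnet18 and the choice of backbone are assumptions for illustration, not the paper's training script.

```python
import torch.nn as nn
from torchvision import models

def build_resnet18(pretrained: bool, num_classes: int = 2) -> nn.Module:
    """Return a ResNet18 either initialized from ImageNet or from scratch."""
    weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.resnet18(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # normal vs abnormal head
    return model

imagenet_init = build_resnet18(pretrained=True)   # pretrained ImageNet weights
scratch_init = build_resnet18(pretrained=False)   # random initialization from scratch
```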

      Thoracic

