Machine Learning Algorithms and Applications. Group of authors
List of Illustrations
1 Chapter 1
…day and night in June, 2020.
Figure 1.10 Heat map for all parameters for 3 days and nights in December, 2017.
Figure 1.11 Heat map for all parameters for 3 days and nights in June, 2020.
Figure 1.12 Predicted values for O3 for Anand Vihar, New Delhi.
Figure 1.13 Predicted values for PM10 for Sector 62, Noida.
Figure 1.14 Pollution levels in major Indian cities.
2 Chapter 2
Figure 2.1 Adding a key marker on the silkworm egg sheet.
Figure 2.2 Silkworm egg classes: hatched eggs and unhatched eggs.
Figure 2.3 Core CNN model.
Figure 2.4 Foreground-background segmentation of the entire silkworm egg sheet: (a) input and (b) output.
Figure 2.5 CNN training model to predict egg location in terms of pixel values.
Figure 2.6 Result of the egg location CNN model.
Figure 2.7 Result of egg classification generated by the proposed method.
3 Chapter 3
Figure 3.1 Deep Neural Network architecture.
Figure 3.2 Framework of the proposed methodology.
Figure 3.3 Distribution of average wind speed: (a) distribution chart and (b) histogram.
Figure 3.4 The architecture of the proposed DNN model.
Figure 3.5 Prediction accuracy based on (a) RMSE, (b) MAE, and (c) R2.
Figure 3.6 Comparison of predicted against the actual values...
Figure 3.7 Comparison of prediction accuracy of DNN and other models...
4 Chapter 4
Figure 4.1 ResNet module (adapted from...).
Figure 4.2 Bridge connection in ResNet (adapted from...).
Figure 4.3 Squeeze-and-Excitation block (adapted from...).
Figure 4.4 Modified bridge...
Figure 4.5 (a) Training losses plotted for depth of 20 layers. (b) Training losses plotted for depth of 32 layers.
Figure 4.6 (a) Training losses plotted for depth of 44 layers. (b) Training losses plotted for depth of 56 layers.
Figure 4.7 Training losses plotted for depth of 110 layers.
5 Chapter 5
Figure 5.1 Uncovering the network layers for final computation.
Figure 5.2 Terms associated with DS.
Figure 5.3 ANN serves as a base for DL.
Figure 5.4 Flow for pre-trained network-based learning model...
Figure 5.5 Typical CNN process...
Figure 5.6 Primitive RNN architecture.
Figure 5.7 Graph-based representation in a recursive NN.
Figure 5.8 A 3 × 3 filter convolving over a 6 × 6 image matrix.
Figure 5.9 Pooling reduces the size of the transition matrix.
Figure 5.10 Multi-layer neural network.
6 Chapter 6
Figure 6.1 Rise in credit card issuance during 2012–2016.
Figure 6.2 Proposed two-stage credit scoring model.
Figure 6.3 Comparative graph on various credit scoring datasets.
7 Chapter 7
Figure 7.1 Proposed framework.
Figure 7.2 Plot of histogram comparison for shot boundary detection.
Figure 7.3 Key frames extracted by the proposed framework: (a) before post-processing and (b) after post-processing.
Figure 7.4 (a) Proposed system summary. (b) VSUMM summary.
Figure 7.5 Performance of proposed framework based on (a) average compression ratio and (b) average computation time.
Figure 7.6 Comparison of proposed system with existing systems based on quantitative metrics.
Figure 7.7 Performance of proposed system based on qualitative measures.
8 Chapter 8
Figure 8.1 Workflow diagram describing the approach to ECG classification used in the paper.
Figure 8.2 Comparison of classification accuracies between the XGBoost and AdaBoost classifiers.
Figure 8.3 Comparison between the proposed work and other existing literature.
9 Chapter 9
Figure 9.1 Microarray technology.
Figure 9.2 Basic information on gene expression: (a) how a gene is expressed and (b) external effects of gene expression that produce diseases.
Figure 9.3 Foundation of GSA: (a) mass attraction and (b) flowchart of GSA.
Figure 9.4 Proposed model.
Figure 9.5 Fitness evaluation.
Figure 9.6 Fitness vs. iteration for (a) Prostate data, (b) DLBCL data, (c) Gastric cancer data, (d) Lymphoma and leukemia data, and (e) Child ALL data.
Figure 9.7 Accuracy vs. classifier for (a) Prostate data, (b) DLBCL data, (c) Lymphoma and leukemia data, (d) Gastric cancer data, and (e) Child ALL data.
Figure 9.8 Heat map for (a) Child ALL data, (b) DLBCL data, (c) Prostate data, (d) Gastric cancer data, and (e) Lymphoma and leukemia data.
10 Chapter 10
Figure 10.1 Sample images from both the employed databases.
Figure 10.2 ROC curves for same-spectral matchings of (top) PolyU and (bottom) Cross-Eyed.
Figure 10.3 ROC curves for cross-spectral matchings of (top) PolyU and (bottom) Cross-Eyed.
Figure 10.4 Bar charts for EER values after feature-level fusion.
11 Chapter 11
Figure 11.1 Dataset description.
Figure 11.2 Explained variance ratio of fitted principal components.
Figure 11.3 ANN architecture.
Figure 11.4 Rectified linear units.
Figure 11.5 Sigmoid function.
Figure 11.6 Random forest architecture.
Figure 11.7 Model accuracy.
Figure 11.8 Model loss.
Figure 11.9 Accuracy of different models.
Figure 11.10 ROC curve for XGBoost.
Figure 11.11 ROC curve for Random Forest.
12 Chapter 12
Figure 12.1 Sample images from the NIST DB-4 dataset showing all five classes.
Figure 12.2 Results after applying the SURF algorithm...
Figure 12.3 Left: Original, raw fingerprint image. Right: Normalized image.
Figure 12.4 Left: Original, raw fingerprint image. Right: Estimated local orientation.
Figure 12.5 Left: Tented Arch fingerprint. Right: Whorl fingerprint...
Figure 12.6 Confusion matrix. The diagonal elements show the number of correctly predicted fingerprints of each class.
Figure 12.7 The outputs of the 1st to 5th convolutional layers...
Figure 12.8 (a–c) The raw fingerprint image from the tented arch class...
13 Chapter 13
Figure 13.1 Seven basic emotional expressions...
Figure 13.2 Pipeline approach for the automatic and handcrafted methods.
Figure 13.3 Gradient computation depending on the kernel.
Figure 13.4 SVM classifier for two features.
14 Chapter 14
Figure 14.1 Artificial neural network.
Figure 14.2 Layers of the VGG16 and VGG19 networks.
Figure 14.3 The architecture of a convolutional neural network.
Figure 14.4 Plots of the Xception network: (a) training loss and (b) accuracy.
Figure 14.5 Plots of the AnimNet network: (a) training loss and (b) accuracy.
Figure 14.6 Accuracy for the class “dog” (image size = 200 × 200 pixels): (a) using the Xception network and (b) using the AnimNet network.
Figure 14.7 Accuracy for the class “cat” (image size = 200 × 200 pixels): (a) using the Xception network and (b) using the AnimNet network.
Figure 14.8 Accuracy for the class “cow” (image size = 200 × 200 pixels): (a) using the Xception network and (b) using the AnimNet network.
Figure 14.9 Accuracy for the class “horse” (image size = 200 × 200 pixels): (a) using the Xception network and (b) using the AnimNet network.
Figure 14.10 Accuracy for the class “goat” (image size = 200 × 200 pixels): (a) using the Xception network and (b) using the AnimNet network.
Figure 14.11 Accuracy for the class “monkey” (image size = 200 × 200 pixels): (a) using the Xception network and (b) using the AnimNet network.
15 Chapter 15
Figure 15.1 Basic architecture of the proposed system.
Figure 15.2 Filtering based on linguistic knowledge.
Figure 15.3 Contextually similar terms for the word “Punctual”.
Figure 15.4 Teachers’ features with their importance.
Figure 15.5 System performance on teachers’ and laptops’ feedback.
16 Chapter 16
Figure 16.1 The three-phase proposed approach.
Figure 16.2 Phrase extraction sequence.
Figure 16.3 Map-reduce framework.
Figure 16.4 Sample annotated document.
Figure 16.5 Single context CBOW model.
Figure 16.6 Generalized CBOW model.
17 Chapter 17
Figure 17.1 Flowchart for image anonymization.
Figure 17.2 GAN.
Figure 17.3 Wasserstein distance.
Figure 17.4 Image anonymization to prevent model inversion attacks.
Figure 17.5 Results on the MNIST dataset.
Figure 17.6 Comparison of privacy gain for WGAN-GP and DCGAN.
Figure 17.7 MNIST data distribution: the left-hand figure shows anonymized data...
List of Tables
1 Chapter 1