The Digital Agricultural Revolution. Group of authors

      Table 2.2 Normalized values of the input parameters by mandal (excerpt; column labels inferred from the surrounding text). A partial preceding row survives in the source: … 0.363 0.844 0.723 0.725 (mandal name and one value missing).

      S. No.  Mandal          NDVI    Ts      APAR    CWSI    Yield
      7       Thotlavalluru   0.687   0.439   0.844   0.686   0.450
      8       Avanigada       0.563   0.454   0.785   0.359   0.000
      9       Pamidimukkala   0.575   0.687   0.791   0.302   0.900
      10      Guduru          0.459   0.767   0.735   0.000   0.915
      11      Penamaluru      0.478   0.706   0.744   0.063   0.525
      12      Koduru          0.748   0.482   0.874   0.821   0.900
      13      Pamarru         0.771   0.390   0.885   0.929   0.619
      14      Machilipatnam   0.533   0.452   0.771   0.284   0.921
      15      Pedana          0.491   0.203   0.750   0.266   0.957
      16      Mopidevi        0.496   0.790   0.752   0.080   0.544
      17      Nagayalanka     0.511   0.521   0.760   0.203   0.750

      Table 2.2 presents the normalized data, including the maximum and minimum values of each parameter. The average NDVI values ranged from 0.459 to 0.687 across the mandals. Similarly, APAR and CWSI ranged from 0.750 to 0.874 and from 0 to 0.929, respectively. Surface temperature and average yield were also normalized: the normalized surface temperature ranged from 0.148 at Kankipadu to 0.831 at Vijayawada rural, and the highest normalized yield was 0.957, at Pedana. The point-wise normalized parameter values were used as input to the NN model.
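The normalization described above maps each parameter onto the [0, 1] range. A minimal sketch of this min-max normalization, using made-up values rather than the chapter's actual data (the chapter's own code is in MATLAB; NumPy is used here for illustration):

```python
import numpy as np

# Hypothetical raw values of one parameter (e.g. NDVI) across mandals;
# these are illustrative, not the chapter's data.
raw = np.array([0.31, 0.52, 0.47, 0.68, 0.40])

# Min-max normalization: (x - min) / (max - min) maps values to [0, 1],
# matching the normalized ranges reported for Table 2.2.
normalized = (raw - raw.min()) / (raw.max() - raw.min())

print(normalized)
```

By construction, the smallest raw value maps to 0 and the largest to 1, so parameters with different physical units (temperature, yield, indices) become comparable inputs for the network.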

      2.6.1 Materials and Methods

       2.6.1.1 ANN Model Design

      The hidden layer was tested with different numbers of hidden neurons to find the optimum; the optimum number of hidden neurons and the model parameters were determined by trial and error. W_ij is the connecting weight between the ith input-layer neuron and the jth hidden-layer neuron, and V_jk is the weight between the jth hidden-layer neuron and the kth output-layer neuron (in this case k = 1). Momentum and learning rate are the two main training parameters, which govern the convergence of steepest descent [71]. The final weighting factors are used to simulate the relationship between crop yield and the corresponding crop growth factors, and the weights generated by the trained network are saved for estimation on new data. In the developed models, the number of hidden-layer neurons was varied between 1 and 30. A sigmoid transfer function was used in the hidden layer and a linear activation function in the output layer. The code to develop the neural network was written in the MATLAB programming language.
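The training setup above (one hidden layer, sigmoid hidden units, linear output, steepest descent with momentum, and a trial-and-error search over hidden-layer sizes) can be sketched as follows. This is a simplified stand-in for the chapter's MATLAB model, using NumPy and synthetic data; the function name, data, and hyperparameter values are illustrative assumptions, not the authors' code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, y, n_hidden, lr=0.1, momentum=0.9, epochs=500, seed=0):
    """Train a one-hidden-layer network (sigmoid hidden, linear output)
    by gradient descent with momentum; returns the final training MSE."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))  # input -> hidden (W_ij)
    V = rng.normal(scale=0.5, size=(n_hidden, 1))           # hidden -> output (V_jk)
    dW_prev, dV_prev = np.zeros_like(W), np.zeros_like(V)
    for _ in range(epochs):
        Z = sigmoid(X @ W)            # hidden-layer activations
        err = Z @ V - y               # linear output minus target
        dV = Z.T @ err / len(X)       # gradient wrt output weights
        dW = X.T @ (err @ V.T * Z * (1 - Z)) / len(X)  # backprop to W
        dV_prev = momentum * dV_prev - lr * dV  # momentum update
        dW_prev = momentum * dW_prev - lr * dW
        V += dV_prev
        W += dW_prev
    return float(np.mean((sigmoid(X @ W) @ V - y) ** 2))

# Trial-and-error search over hidden-layer sizes, as the text describes
# (the chapter varies them from 1 to 30; a smaller range is used here).
X = np.random.default_rng(1).uniform(size=(40, 5))  # 5 normalized inputs
y = X.mean(axis=1, keepdims=True)                   # synthetic target
best = min(range(1, 11), key=lambda h: train_mlp(X, y, h))
print("best hidden-layer size:", best)
```

The "trial and error" selection is just an argmin over candidate hidden-layer sizes by training error; in practice a held-out validation set would be used to avoid favoring overfitted larger networks.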

      The hidden layer receives data from the input-layer neurons. In the hidden layer, the inputs are multiplied by suitable weights and summed, and the sigmoid transfer function is applied before the output layer. The mathematical expression of the linear transfer function is

      (2.5) net_j = Σ_i W_ij x_i

      The output y is expressed as:

      (2.6) y = f(net_j)

      where f is the neuron activation (transfer) function. The transfer function of each neuron in the network is the sigmoid, given as

      (2.7) f(x) = 1 / (1 + e^(−x))
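The forward pass implied by Eqs. (2.5)-(2.7) — weighted sums into the hidden layer, sigmoid hidden activations, and a linear output unit — can be written out directly. The weights below are made-up illustrative values, and the network is the single-output case (k = 1) described in the text:

```python
import math

def sigmoid(x):
    # Eq. (2.7): sigmoid transfer function
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W, V):
    """Forward pass for one sample through a single-hidden-layer network.
    W[i][j]: weight from input i to hidden neuron j (W_ij);
    V[j]: weight from hidden neuron j to the single output (V_jk, k = 1)."""
    n_hidden = len(V)
    # Eq. (2.5): net_j = sum_i W_ij * x_i
    net = [sum(W[i][j] * x[i] for i in range(len(x))) for j in range(n_hidden)]
    z = [sigmoid(n) for n in net]  # hidden-layer outputs
    # Eq. (2.6) with a linear output transfer function: y = sum_j V_j * z_j
    return sum(V[j] * z[j] for j in range(n_hidden))

# Illustrative weights only (two inputs, two hidden neurons, one output)
x = [0.5, 0.7]
W = [[0.2, -0.4], [0.1, 0.3]]
V = [0.6, -0.2]
print(forward(x, W, V))
```

Stacking this computation over all training samples, and adjusting W and V by steepest descent with momentum, reproduces the training scheme described in Section 2.6.1.1.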
