      Table 1.2. Classification accuracies per class and overall accuracy (%)

                         Water    Urban    Vegetation   Bare soil   Containers   Overall
      Pléiades only      100      61.66    81.69        82.82       56.72        76.57
      Pléiades and CS    100      44.32    83.54        74.75       49.12        70.34
      Pléiades and RS    92.56    44.85    79.85        78.62       42.15        67.60
      Proposed method    90.79    91.45    82.59        81.02       54.85        80.14
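      As a quick cross-check, the following Python sketch transcribes the accuracies above and reproduces the comparisons discussed below. Note that the assignment of the first three columns to the "water", "urban" and "vegetation" classes, and the reading of the last column as the overall accuracy, follow the discussion below rather than an explicit header in this table fragment.

# Per-class and overall accuracies (%) transcribed from Table 1.2.
# Column assignment is an assumption inferred from the surrounding text.
classes = ["water", "urban", "vegetation", "bare soil", "containers", "overall"]

accuracies = {
    "Pléiades only":   [100.00, 61.66, 81.69, 82.82, 56.72, 76.57],
    "Pléiades and CS": [100.00, 44.32, 83.54, 74.75, 49.12, 70.34],
    "Pléiades and RS": [ 92.56, 44.85, 79.85, 78.62, 42.15, 67.60],
    "Proposed method": [ 90.79, 91.45, 82.59, 81.02, 54.85, 80.14],
}

def accuracy(method, cls):
    return accuracies[method][classes.index(cls)]

# Gain on the "urban" class of the proposed method over the Pléiades-only input
# (about +30 percentage points, consistent with the discussion below):
urban_gain = accuracy("Proposed method", "urban") - accuracy("Pléiades only", "urban")
print(f"Urban-class gain: {urban_gain:+.2f} percentage points")

# The proposed method also yields the highest overall accuracy:
best = max(accuracies, key=lambda m: accuracy(m, "overall"))
print(f"Highest overall accuracy: {best} ({accuracy(best, 'overall'):.2f}%)")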

      In order to explore in more detail the capability of the technique to benefit from the synergy of the VNIR optical, X-band radar and C-band radar imagery in the input series, the aforementioned results were compared to those achieved when (i) only the Pléiades data were used for classification or (ii) the Pléiades data were used in conjunction with only one of the two SAR images (see Table 1.2 and Figures 1.7 and 1.8(d)–(f)). In all of these cases, the same classification scheme based on quad-trees, MPM and FMM as in the proposed method was applied. The results obtained when only the two SAR images in the series were used are omitted because they corresponded to low accuracy values – an expected outcome when the aforementioned classes are classified using only a short series of two SAR images.

      The results in Table 1.2 confirm that jointly exploiting all three satellite data sources made it possible to achieve remarkably higher accuracies than when only a subset of these sources was used. When only the Pléiades image was employed, the “water”, “vegetation” and “bare soil” classes were discriminated quite accurately, but the “urban” class was not. When the second proposed method was applied to these VNIR data together with both the COSMO-SkyMed and the RADARSAT-2 data, the improvement in the discrimination of the “urban” class was approximately +30%. Furthermore, in this case, the results were more accurate than those generated by jointly using the Pléiades image along with only one of the two SAR images in the series. These results suggest the capability of the second proposed method to benefit from a multimission time series including multifrequency and multiresolution imagery from current satellite instruments at very high spatial resolution. A drawback in the results of the proposed algorithm was the lower accuracy for the “water” class, compared to when only the Pléiades data or the Pléiades and COSMO-SkyMed imagery were employed. The “water” class exhibits significant texture in the RADARSAT-2 image (see Figure 1.7(c)), and the proposed algorithm does not include any texture analysis component. The impact of this issue is limited, as the “water” class is still discriminated by the proposed algorithm with an accuracy of around 91%. Nevertheless, extending the method by integrating texture extraction appears to be a promising generalization.
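      As an illustration of what such a generalization could look like (a hypothetical sketch, not part of the method described in this chapter), the following Python code computes a simple local-variance texture feature from a SAR amplitude image; a feature of this kind could be appended to the pixel-wise observation vector of the corresponding quad-tree level before classification. The window size and the choice of local variance are illustrative assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(image, window=7):
    """Local variance texture feature: E[x^2] - E[x]^2 over a sliding window."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    return np.clip(mean_sq - mean ** 2, 0.0, None)

# Example with a synthetic SAR-like amplitude image (illustrative data only).
rng = np.random.default_rng(0)
sar_amplitude = rng.rayleigh(scale=1.0, size=(512, 512))
texture = local_variance(sar_amplitude, window=7)

# Stack amplitude and texture into a per-pixel feature vector, e.g. for the
# finest level of a quad-tree (illustrative, not the chapter's formulation).
features = np.stack([sar_amplitude, texture], axis=-1)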

      In this chapter, the problem of generating a classification map from an input time series composed of multisensor multiresolution remote sensing images has been discussed. First, the related literature in the area of remote sensing data fusion has been reviewed. Then, an advanced approach based on multiple quad-trees in cascade has been described. It derives from the multisensor generalization of a previous technique focused on time series of single-sensor data, and it addresses the challenging problem of multisensor, multifrequency and multiresolution fusion for classification purposes.

      In the framework of this approach, two algorithms have been developed for two different multimodal fusion objectives. The first one addresses the general task of jointly classifying a time series of multisensor multiresolution imagery. The second one focuses on the special case of the fusion of multimission data acquired by COSMO-SkyMed, RADARSAT-2 and Pléiades. For both techniques, the fusion task is formalized in the framework of hierarchical probabilistic graphical models – most notably, hierarchical MRFs on cascaded quad-trees. Inference and parameter estimation are addressed through the marginal posterior mode (MPM) criterion and FMM, respectively.
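      To make the structure of the cascaded quad-tree model more concrete, the following Python sketch illustrates one possible organization: each input image is attached to the quad-tree level whose cell size matches its spatial resolution, levels are linked by the usual factor-of-two parent–child relation, and a top-down pass combines the upsampled parent posterior with a per-level likelihood. The two classes, the Gaussian likelihoods and the parent–child transition matrix are illustrative assumptions, and only the top-down portion of the recursion is sketched; the full MPM inference also involves a preliminary bottom-up pass and, in the proposed approach, a cascade across multiple quad-trees.

import numpy as np

# Illustrative setup: 2 classes, a 3-level quad-tree (coarse -> fine), and one
# single-band image attached to each level (e.g. data at three resolutions).
rng = np.random.default_rng(0)
n_classes = 2
sizes = [16, 32, 64]                       # spatial size of each quad-tree level
images = [rng.normal(loc=0.0, scale=1.0, size=(s, s)) for s in sizes]

# Assumed class-conditional Gaussian likelihood parameters (mean, std) per class.
means, stds = np.array([-0.5, 0.5]), np.array([1.0, 1.0])

# Assumed parent-child transition probabilities P(child class | parent class).
transition = np.array([[0.8, 0.2],
                       [0.2, 0.8]])

def likelihood(image):
    """Pixel-wise class-conditional likelihoods under the assumed Gaussians."""
    x = image[..., None]
    return np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))

# Top-down causal pass: the root level uses a uniform prior, and each finer
# level combines the upsampled parent posterior with its own likelihood.
posterior = None
for image in images:
    lik = likelihood(image)
    if posterior is None:
        prior = np.full_like(lik, 1.0 / n_classes)
    else:
        parent = np.repeat(np.repeat(posterior, 2, axis=0), 2, axis=1)
        prior = parent @ transition        # mix parent classes via the transition
    posterior = prior * lik
    posterior /= posterior.sum(axis=-1, keepdims=True)

labels = posterior.argmax(axis=-1)         # MPM-like decision at the finest level
print(labels.shape, np.bincount(labels.ravel(), minlength=n_classes))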

      Examples of experimental results provided by the two proposed algorithms have been shown for high- and very-high-resolution time series associated with case studies in Port-au-Prince, Haiti. The results have suggested that the described algorithms successfully benefit from the data sources in the input multisensor time series, improving on the results obtained with single-mission, single-scale or previous methods in terms of classification accuracy, computation time or spatial regularity.

      A major property of the proposed hierarchical Markovian framework is its flexibility. The graphical architecture associated with multiple quad-trees in cascade allows input image sources associated with different sensors, acquisition times and spatial resolutions to be incorporated jointly. Relevant extensions of this framework may involve the combination with spatial–contextual models within each layer of the quad-trees, or with the intrinsically multiscale structure of convolutional neural networks (CNNs) (Goodfellow et al. 2016).
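      As a minimal sketch of what a spatial–contextual step within a single layer could look like (an illustrative example, not the model adopted in this chapter), the following Python function applies iterated conditional modes with a Potts prior on the 4-neighborhood to the class posteriors of one quad-tree level.

import numpy as np

def icm_smooth(posteriors, beta=1.0, n_iter=5):
    """One possible spatial-contextual step within a single quad-tree layer:
    iterated conditional modes with a Potts prior on the 4-neighborhood.
    `posteriors` has shape (H, W, n_classes); returns a label map (H, W)."""
    log_post = np.log(np.clip(posteriors, 1e-12, None))
    labels = log_post.argmax(axis=-1)
    h, w, k = posteriors.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                # Count neighbors currently assigned to each candidate class.
                neigh = [labels[x, y]
                         for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= x < h and 0 <= y < w]
                agreement = np.bincount(neigh, minlength=k)
                labels[i, j] = np.argmax(log_post[i, j] + beta * agreement)
    return labels

# Example on synthetic posteriors (illustrative data only):
rng = np.random.default_rng(1)
post = rng.dirichlet(alpha=np.ones(3), size=(32, 32))
print(icm_smooth(post, beta=1.5).shape)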

      The authors would like to thank the Italian Space Agency (ASI), the French Space Agency (CNES) and the Canadian Space Agency (CSA) for providing COSMO-SkyMed, Pléiades and RADARSAT-2 data, respectively. The COSMO-SkyMed and RADARSAT-2 images were procured in the context of the SOAR-ASI 5245 project, framed within the joint ASI-CSA announcement of opportunity.


