Firstly, a variety of remote sensors or systems have provided data essential for urban applications. These include not only some advanced ones, such as high‐resolution satellite systems, hyperspectral remote sensing, high‐resolution synthetic aperture radar (SAR), light detection and ranging (LIDAR), and nighttime satellite systems, but also several new and emergent ones, such as unmanned aerial systems (UASs) and social sensing (including street views). Inexpensive UASs (or drones) equipped with digital cameras (or even LIDAR units) and lightweight GPS units offer spatial flexibility in making sophisticated maps to support various urban applications (e.g. Kalantar et al., 2017; Khan et al., 2017; Dodge, 2018). Social sensing relies upon humans or mobile devices to collect “geotagged” information that can enrich image interpretation with additional human context (e.g. Jiang et al., 2016; Hu et al., 2016; Cai et al., 2017). Street views, such as the Google Street View (GSV) service publicly launched in 2007, offer street‐level imagery of city streetscapes that can help map urban tree cover and other features (e.g. Li et al., 2015b; Berland and Lange, 2017; Seiferling et al., 2017; Dodge, 2018).
Secondly, multi‐temporal analysis of remote sensor imagery has rapidly gained popularity; it is essential for evolving beyond fast‐paced change detection (such as land conversion) and into the monitoring of continuous land use activities with slower change rates (such as land modifications) using satellite time series. This has been an emergent research area in environmental remote sensing since the early 2010s, given the availability of satellite imagery archives (e.g. Landsat and Sentinel data) at no charge (Wulder et al., 2019), the progression of advanced image processing infrastructures (such as high‐end computing systems and cloud computing platforms), and the increasing scientific need to understand continuous changes (e.g. Schneider, 2012; Li et al., 2015a; Fan et al., 2017; Huang et al., 2017; Zhu, 2017; Arévalo et al., 2019; Stokes and Seto, 2019; Zhu et al., 2019; Chen et al., 2020).
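As a minimal, purely illustrative sketch of this idea (not a reproduction of any particular published algorithm), the Python snippet below fits a simple trend‐plus‐seasonality harmonic model to a single pixel's vegetation index time series; persistent departures of new observations from the fitted model are what such continuous‐monitoring approaches use to flag change. The observation dates and values here are synthetic.

```python
import numpy as np

def fit_harmonic_model(t, y, period=365.25):
    """Fit a simple trend + annual-harmonic model to one pixel's index
    time series (a deliberately simplified sketch of the harmonic models
    used in continuous change monitoring)."""
    omega = 2.0 * np.pi / period
    # Design matrix: intercept, linear trend, annual cosine and sine terms.
    X = np.column_stack([np.ones_like(t), t, np.cos(omega * t), np.sin(omega * t)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coeffs
    return coeffs, residuals

# Synthetic example: t in days since the start of monitoring, y = NDVI-like values.
rng = np.random.default_rng(0)
t = np.array([10., 42., 75., 130., 190., 250., 300., 355., 400., 460.])
y = 0.3 + 0.2 * np.sin(2.0 * np.pi * t / 365.25) + rng.normal(0.0, 0.02, t.size)
coeffs, residuals = fit_harmonic_model(t, y)
# Large, persistent residuals against the fitted model would flag a change.
print(coeffs, np.abs(residuals).max())
```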
Thirdly, image merging or fusion in urban areas has been moving beyond pan‐sharpening and into other forms. These include multi‐sensor data merging, such as merging multispectral imagery with LIDAR point clouds (e.g. Meng et al., 2012) and merging optical images with SAR data (e.g. Errico et al., 2015); multi‐temporal data merging, which combines images of the same area acquired on different dates into a composite (e.g. Schneider, 2012; Kabisch et al., 2019); spatial and temporal image fusion, which generates a new dataset with high spatial and temporal resolutions from one dataset with high spatial but low temporal resolution and another with low spatial but high temporal resolution (e.g. Chen et al., 2015; Wang and Atkinson, 2018); and merging of imagery with ancillary data that can improve image classification (e.g. Lai and Yang, 2020; Zhang and Yang, 2020).
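As a simple illustration of the multi‐sensor form of fusion, the following Python sketch stacks co‐registered multispectral bands with a rasterized LIDAR height layer into a single feature cube for subsequent classification. The arrays are synthetic placeholders; a real workflow would also handle co‐registration, resampling, and nodata values.

```python
import numpy as np

def stack_spectral_and_lidar(multispectral, lidar_height):
    """Merge co-registered multispectral bands (H, W, B) with a rasterized
    LIDAR height layer (H, W) into one feature cube (H, W, B + 1)."""
    # Rescale height so its magnitude is comparable to reflectance features.
    h = (lidar_height - lidar_height.min()) / (np.ptp(lidar_height) + 1e-9)
    return np.concatenate([multispectral, h[..., np.newaxis]], axis=-1)

# Synthetic inputs: a 4-band image chip and a matching height raster.
ms = np.random.rand(256, 256, 4).astype(np.float32)
heights = np.random.gamma(2.0, 5.0, size=(256, 256)).astype(np.float32)
features = stack_spectral_and_lidar(ms, heights)   # shape (256, 256, 5)
```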
Fourthly, artificial intelligence has developed beyond shallow learning algorithms and into deep learning models, which use many processing layers to learn representations of data with hierarchical abstraction and can thereby help discover complex structure in large remote sensor datasets (LeCun et al., 2015). Deep convolutional nets have brought about breakthroughs in image classification over complex urban areas (e.g. Maggiori et al., 2017; Sharma et al., 2017), whereas recurrent nets have demonstrated their effectiveness in processing satellite time series, leading to improved performance in pattern recognition (e.g. Sharma et al., 2018). In addition, deep residual networks are easier to train than plain deep convolutional networks and thus represent one of the most promising deep network architectures for image classification (He et al., 2016).
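As an illustration only, the following sketch (written with the PyTorch library) defines a basic residual block of the kind described by He et al. (2016); the identity shortcut that adds the input back to the convolutional output is what eases the training of very deep networks. It is a minimal building block rather than a complete classification network, and the channel and patch sizes are arbitrary.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic residual block: two 3x3 convolutions plus an identity shortcut."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity shortcut

# Arbitrary example: a batch of 32-channel, 64 x 64 feature maps.
block = ResidualBlock(channels=32)
x = torch.randn(1, 32, 64, 64)
y = block(x)   # same shape as the input
```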
Fifthly, with more advanced pattern classifiers being used for urban feature extraction, there has been a trend moving beyond single classifiers and into multiple classifier systems (or classifier ensembles). Several relatively novel classifiers, such as support vector machines and random forests, are quite promising, but their performance may be compromised by their inability to account for classification errors caused by class ambiguity arising from mixed pixels, within‐class variability, dynamic zones, transitional zones, and topographic shading (Smits, 2002). In contrast, multiple classifier systems can generate a better outcome for a classification task by combining a set of single classifiers (as base classifiers), assuming that each individual classifier performs well over at least certain regions of the feature space and tends to make independent prediction errors (see Du et al., 2012; Shi and Yang, 2017; Patidar and Keshari, 2018; Shen et al., 2018).
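As a concrete, if simplified, example of a classifier ensemble, the following Python sketch (using scikit‐learn) combines a support vector machine and a random forest through soft voting, so that the class probabilities of the two base classifiers are averaged and an error made by one can be offset by the other. The training data are synthetic stand‐ins for per‐pixel spectral features and land‐cover labels.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a per-pixel feature matrix (bands, indices, texture)
# and land-cover labels.
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Soft-voting ensemble of two base classifiers.
ensemble = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("Ensemble accuracy:", ensemble.score(X_test, y_test))
```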
Sixthly, big data, in terms of volume, variety, and velocity, challenge data acquisition, storage, querying, sharing, analysis, visualization, updating, and information privacy. In recent years, various cloud computing platforms have been developed to deal with these challenges and are increasingly used to execute large‐scale spatial data processing and services. Google Earth Engine (GEE; https://earthengine.google.com/) and NASA Earth Exchange (NEX; https://c3.nasa.gov/nex/) are two prominent open cloud‐computing platforms supporting large‐scale Earth science data and analysis (e.g. Patel et al., 2015; Huang et al., 2017; Gorelick et al., 2017; Liu et al., 2018; Soille et al., 2018).
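To give a flavor of how such platforms are used, the following sketch employs the GEE Python API to build a cloud‐screened Sentinel‐2 composite and compute mean NDVI over a region of interest, with the computation executed on Google's servers rather than on the local machine. The dataset identifier, region, and thresholds are only examples, and the call assumes an authenticated Earth Engine account.

```python
import ee

ee.Initialize()  # assumes prior authentication (e.g. `earthengine authenticate`)

# An example region of interest (xMin, yMin, xMax, yMax in degrees).
roi = ee.Geometry.Rectangle([-84.5, 33.6, -84.2, 33.9])

# Build a cloud-screened Sentinel-2 surface-reflectance composite server-side.
composite = (
    ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
    .filterBounds(roi)
    .filterDate("2023-01-01", "2023-12-31")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
    .median()
)

# NDVI from the near-infrared (B8) and red (B4) bands, averaged over the region.
ndvi = composite.normalizedDifference(["B8", "B4"]).rename("NDVI")
print(ndvi.reduceRegion(reducer=ee.Reducer.mean(), geometry=roi, scale=20).getInfo())
```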
Lastly, the integration of remote sensing with relevant geospatial data and technologies has supported a variety of innovative applications in urban areas, such as urban growth analysis (e.g. Huang et al., 2017), unplanned and informal settlement mapping (e.g. Kuffer et al., 2016), global urban settlement mapping (e.g. Corbane et al., 2017), urbanization impacts upon vegetation phenology (e.g. Zipper et al., 2016; Li et al., 2017), urban greenness and health (e.g. Mennis et al., 2018), urban heat island (UHI) and thermal sensing (e.g. Wang et al., 2016), urban climate (Johnson and Shepherd, 2018), urban hazards (e.g. Costanzo et al., 2016), urban planning (e.g. Norton et al., 2015), and urban sustainability (e.g. Bonafoni et al., 2017). There has been a trend in remote sensing applications of evolving beyond observing spatio‐temporal patterns and into analyzing socio‐environmental processes, and ultimately towards pursuing urban sustainability (Seto et al., 2017).
1.3 OVERVIEW OF THE BOOK
With a total of 21 chapters, this book is divided into four major parts in addition to an introductory part: sensors and systems for urban areas; algorithms and techniques for urban attribute extraction; urban socioeconomic applications; and urban environmental applications. Each part consists of multiple chapters dedicated to specific topics.
1.3.1 SENSORS AND SYSTEMS FOR URBAN AREAS
With six major chapters, this part (Part II) discusses several advanced and emerging platforms or systems, such as unmanned aircraft systems and social sensing, which provide new opportunities for advancing urban studies. It begins with Chapter 2, which examines urban built‐up volume through three‐dimensional analyses with lidar and radar data. This effort was motivated by the importance of the vertical dimension in urban built‐up areas and the lack of such information in conventional image‐based analyses. The authors used spaceborne radar data to monitor built‐up volume, which was further validated with lidar data. They also discuss the future extension of this work with multiple high‐resolution satellite SARs for quantifying urban built‐up volume.
Unmanned aircraft system (UAS) platforms represent a new frontier of remote sensing applications. Chapter 3 discusses the utility of UAS for urban remote sensing research. It introduces the concept of UAS, common types of UAS models and onboard cameras, and a typical UAS data collection procedure. Several urban applications are discussed, along with a case study demonstrating how UAS can be used for 3D mapping of urban structures. Finally, the authors discuss some major challenges of using UAS for urban studies related to regulations, operations, and data processing.
Big geotagged data from mobile phones, social media, vehicle trajectories, and street views offer new opportunities for understanding human behaviors and the characteristics of cities. The remaining four chapters in this part focus on social sensing. Chapter 4 reviews various analytical methods, such as temporal signature analysis, text