Many factors influence UAS data collection flights, but the three discussed here arguably have the most significant direct impact on data quality (Mesas‐Carrascosa et al., 2016).
Image overlap refers to the extent to which each image taken by a UAS along its flight path overlaps with adjacent images. Most aerial remote sensing flight designs use a grid pattern that enables operators to designate a frontal image overlap percentage and a side overlap percentage. Image overlap directly affects the quality of data outputs regardless of which specific data outputs are created (see Section 3.3.3). Larger image overlap is usually associated with higher‐quality data outputs but also requires longer computational time to generate them. It is generally recommended to use longitudinal and lateral image overlaps of 60–90%, depending on the mapping purposes and the computational resources available (Torres‐Sánchez et al., 2018). Figure 3.1 presents an example flight plan design and a depiction of image overlap.
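To make the geometry concrete, the sketch below (not from the text; the footprint dimensions are illustrative assumptions) shows how frontal and side overlap percentages translate into the camera trigger distance along a flight line and the spacing between adjacent lines.

```python
# A minimal sketch of how overlap percentages determine flight-plan spacing.
# The image footprint dimensions used in the example are assumptions.

def overlap_spacing(footprint_along_m: float, footprint_across_m: float,
                    frontal_overlap: float, side_overlap: float):
    """Return (trigger distance, flight-line spacing) in meters.

    footprint_along_m  -- image footprint along the flight direction
    footprint_across_m -- image footprint across the flight direction
    frontal_overlap    -- e.g. 0.8 for 80% overlap between successive images
    side_overlap       -- e.g. 0.7 for 70% overlap between adjacent lines
    """
    trigger_distance = footprint_along_m * (1.0 - frontal_overlap)
    line_spacing = footprint_across_m * (1.0 - side_overlap)
    return trigger_distance, line_spacing

# Example: a hypothetical 100 m x 150 m footprint with 80%/70% overlap
d, s = overlap_spacing(100.0, 150.0, 0.80, 0.70)
print(f"Trigger every {d:.0f} m along track; space lines {s:.0f} m apart")
# -> Trigger every 20 m along track; space lines 45 m apart
```

Raising the overlap from 70% to 90% thus more than doubles the number of flight lines over the same area, which is why higher overlap lengthens both flight and processing time.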
FIGURE 3.1 As it flies along the gridded flight path, a UAS collects overlapping images that can be used to generate a point cloud of common feature points. A larger image overlap yields more matches of common feature points but can significantly increase the flight duration.
Source: Based on Torres‐Sánchez et al. (2018).
Another important factor in autonomous flight plan design is image acquisition altitude. The altitude at which a UAS flies during a data collection mission can dramatically affect the time needed to complete the mission, the area that can be covered, and the effective resolution of its sensors. Lower flight altitudes allow the UAS to collect higher‐resolution data but require longer flight times, as the UAS must fly much further to maintain the minimum image overlap required for reconstruction. Conversely, if a UAS is flown at a higher altitude, the spatial resolution of the output imagery becomes coarser, but the flights take less time to cover the same area. According to Agüera‐Vega et al. (2017), the accuracy of DSMs and orthophotos derived from UAS images tends to improve with lower acquisition altitudes, but image acquisition takes much longer and the output data are more computationally intensive to process. In addition, physical constraints can affect the choice of flight altitude for a project. Where there are tall buildings (such as in an urban setting), the flights must be high enough to avoid collisions during the mapping mission. Terrain also matters: because many UAS hold altitude relative to the takeoff point, changes in ground elevation across the site alter the effective height above terrain.
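The altitude–resolution tradeoff follows from the standard ground sampling distance (GSD) relationship. The sketch below uses illustrative camera parameters (assumptions roughly typical of a small UAS RGB camera, not values from the text):

```python
# A minimal sketch of the standard GSD relationship, showing why lower
# altitudes yield finer resolution. Camera parameters are illustrative.

def gsd_cm_per_px(altitude_m: float, focal_length_mm: float,
                  sensor_width_mm: float, image_width_px: int) -> float:
    """GSD in cm/pixel for a nadir image at the given flight altitude."""
    return (altitude_m * 100.0 * sensor_width_mm) / (focal_length_mm * image_width_px)

# Assumed camera: 8.8 mm focal length, 13.2 mm sensor width, 5472 px wide
for altitude in (40, 80, 120):
    print(f"{altitude:4d} m -> {gsd_cm_per_px(altitude, 8.8, 13.2, 5472):.2f} cm/px")
# Halving the altitude halves the GSD, but it roughly quadruples the number
# of images needed to cover the same area at a fixed overlap.
```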
The third important factor influencing the quality of UAS data outputs is sensor orientation. Early in the mission planning process, when an operator considers the intended application and goals of a project, one basic decision is whether 3D data outputs are needed. This matters because image orientation can significantly affect the quality of a 3D reconstruction. Operators commonly use nadir image orientations for most mapping purposes, but image datasets acquired at oblique angles generally work better for constructing 3D datasets such as point clouds, textured models, and DEMs (Küng et al., 2011; Chiabrando et al., 2017; Boonpook et al., 2018). Including oblique image orientations allows the facades of tall features to be better captured, resulting in a more complete dataset. However, capturing a complete oblique dataset takes much longer than acquiring a nadir dataset and is also more computationally intensive to process. Because of the angled perspective of oblique imagery, double‐grid flight patterns are commonly used to capture all four sides of any features present. Since image overlap is crucial for reconstructing image datasets into 2D and 3D data outputs, the UAS operator must pay extra attention when designing a flight plan with image orientations other than nadir.
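As a rough illustration of the double‐grid pattern mentioned above, the sketch below (an assumption for illustration, not the text's specific method) generates two perpendicular sets of serpentine flight lines over a rectangular area:

```python
# A minimal sketch of a double-grid waypoint pattern: one grid of parallel
# serpentine lines plus a second grid rotated 90 degrees, as commonly flown
# with oblique imagery to capture all four sides of tall features.

def double_grid(width_m: float, height_m: float, spacing_m: float):
    """Yield (x, y) waypoints for two perpendicular sets of flight lines."""
    waypoints = []
    y, direction = 0.0, 1
    while y <= height_m:                      # first grid: east-west lines
        xs = (0.0, width_m) if direction > 0 else (width_m, 0.0)
        waypoints += [(xs[0], y), (xs[1], y)]
        y += spacing_m
        direction *= -1
    x, direction = 0.0, 1
    while x <= width_m:                       # second grid: north-south lines
        ys = (0.0, height_m) if direction > 0 else (height_m, 0.0)
        waypoints += [(x, ys[0]), (x, ys[1])]
        x += spacing_m
        direction *= -1
    return waypoints

# A 300 m x 200 m site with 45 m line spacing
print(len(double_grid(300, 200, 45)), "waypoints")  # roughly double a single grid
```

The doubled line count is what makes oblique missions take roughly twice as long as a single nadir grid over the same site.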
3.3.2 FLIGHT OPERATIONS (IN‐FLIGHT)
The process of in‐flight operations is much simpler than preflight mission planning. Since most remote sensor data collection with UAS is performed in an autonomous or assisted flight mode, more user input tends to be required during the preflight stage than during the flight itself. Once an operator has assessed all relevant factors during mission planning and created a flight plan for a specific project, the rest of the data collection process is straightforward. Assuming the operator is using an autonomous flight plan for mapping purposes, no direct inputs are required after launch, and the operator should focus on maintaining line‐of‐sight and ensuring safety. If the area being mapped is too large to cover on one battery, the operator may have to pause the flight, perform a battery swap, and then resume. Battery limitations are frequently cited as a significant logistical challenge to using UAS for remote sensing, especially for autonomous mapping missions (Dong et al., 2018; Boukoberine et al., 2019; Mazur and Domanski, 2019). After the UAS finishes the flight plan and lands, the operator should download and organize the data. It is generally recommended to check the data while still on‐site, in case any of the images are of poor quality and need to be recollected.
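As a back‐of‐the‐envelope illustration of the battery constraint, the sketch below (all numbers are assumptions) estimates how many battery cycles a mission needs from the total flight‐line length, cruise speed, and per‐battery endurance with a safety reserve:

```python
# A rough sketch of estimating battery swaps for an autonomous mapping
# mission. Endurance, speed, and reserve values are illustrative assumptions.

import math

def batteries_needed(total_line_length_m: float, cruise_speed_ms: float,
                     endurance_min: float, reserve_fraction: float = 0.25) -> int:
    """Number of battery cycles, keeping a reserve for transit and landing."""
    usable_s = endurance_min * 60.0 * (1.0 - reserve_fraction)
    flight_time_s = total_line_length_m / cruise_speed_ms
    return math.ceil(flight_time_s / usable_s)

# Example: 12 km of flight lines at 8 m/s with 25-minute batteries
print(batteries_needed(12_000, 8.0, 25.0))  # -> 2, i.e. one mid-mission swap
```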
An exception to the simplicity of this stage of data collection arises when high‐accuracy ground control validation is required for a project. For many projects, especially those where locational data are not needed, image georeferencing is not required. However, many UAS carry Global Navigation Satellite System (GNSS) receivers that provide positional information to the operator in real time. Onboard GNSS receivers also enable the imagery collected by the onboard sensors to be georeferenced, a process that requires the GNSS receiver to record the coordinates of the UAS at the exact moment an image is taken. Because the UAS moves quickly and images are captured in rapid succession, it is often difficult for the GNSS receiver to synchronize perfectly with the camera. According to Sanz‐Ablanedo et al. (2018), georeferencing images with the onboard GNSS receiver alone can only achieve accuracies in the decimeter to meter range. These accuracies can be improved, however, by including independent ground control points (GCPs) in the 3D dataset. For instance, Turner et al. (2012) demonstrated that when using only the UAS's onboard GNSS for georeferencing, the mean absolute total errors at their two study locations were 1.247 and 0.665 m, respectively; when independent GCPs were included in the same image datasets, the location errors dropped to 0.129 and 0.103 m. This suggests that including independent GCPs can significantly improve locational accuracies in 2D and 3D datasets (Cryderman et al., 2014; Agüera‐Vega et al., 2017; Liu et al., 2018). In recent years there has also been more research on integrating real‐time kinematic (RTK) and post‐processing kinematic (PPK) technology with GNSS receivers onboard UAS, and some researchers have demonstrated sub‐decimeter spatial accuracies without independent GCPs (Forlani et al., 2018; Gabrlik et al., 2018; Zhang et al., 2019). Whether independent GCPs (or a high‐quality onboard GNSS receiver) should be used in a remote sensing project depends on the accuracy the project requires. Not all projects require sub‐decimeter accuracies, so it may be more feasible (and cost‐effective) to use UAS with standard‐quality GNSS receivers that provide lower positional accuracies.
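Locational accuracies such as those reported above are conventionally summarized as the root‐mean‐square error (RMSE) of residuals at independent check points. The sketch below illustrates the computation with made‐up coordinates (the values are not from the cited studies):

```python
# A minimal sketch of reporting georeferencing accuracy as the RMSE of 3D
# residuals at independent check points. Coordinate values are illustrative.

import math

def rmse_3d(measured, surveyed):
    """RMSE of 3D residuals between model coordinates and surveyed check points."""
    sq = [(mx - sx) ** 2 + (my - sy) ** 2 + (mz - sz) ** 2
          for (mx, my, mz), (sx, sy, sz) in zip(measured, surveyed)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical model coordinates vs. independently surveyed check points (m)
model    = [(100.12, 200.08, 50.11), (150.03, 250.10, 52.04)]
surveyed = [(100.00, 200.00, 50.00), (150.00, 250.00, 52.00)]
print(f"Check-point RMSE: {rmse_3d(model, surveyed):.3f} m")
```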
3.3.3 DATA PROCESSING (POSTFLIGHT)
Once all image data have been collected and organized from a UAS, they can be processed into other geospatial products depending on the sensor type and flight plans used. For an RGB sensor, operators can use the collected images to generate a multitude of photogrammetric outputs, such as orthophotos (Strecha et al., 2012; Hardy et al., 2017), DEMs (Kršák et al., 2016; Agüera‐Vega et al., 2017), and 3D point clouds (Siebert and Teizer, 2014;