Applications of deep learning and multi-perspective 2D/3D imaging streams for remote terrain characterization of coastal environments

Date

2021-12

Authors

Pashaei, Mohammad

Publisher

DOI

Abstract

Threats from storms, sea encroachment, and growing population demands put coastal communities at the forefront of engineering and scientific efforts to reduce vulnerabilities for their long-term prosperity. Up-to-date and accurate geospatial information about land cover and elevation (topography) is necessary to monitor and assess the vulnerability of natural and built infrastructure within coastal zones. Advancements in remote sensing (RS) and autonomous systems extend surveying and sensing capabilities to difficult environments, enabling greater flexibility in geospatial data acquisition, higher spatial resolutions, and the ability to “see” in ways previously unattainable. Recent years have witnessed enormous growth in the application of small unmanned aircraft systems (UAS) equipped with digital cameras for hyperspatial resolution imaging and dense three-dimensional (3D) mapping using structure-from-motion (SfM) photogrammetry techniques. In contrast to photogrammetry, light detection and ranging (lidar) is an active RS technique that uses a pulsed laser mounted on a static or mobile platform (from air or land) to capture the 3D structure of a scene in high definition. Rapid proliferation in lidar technology has resulted in new scanning and imaging modalities with ever-increasing capabilities, such as geodetic-grade terrestrial laser scanning (TLS) with ranging distances of up to several kilometers from a static tripod. TLS enables 3D sampling of the vertical structure of occluding objects, such as vegetation, and the underlying topography. Full waveform (FW) lidar systems have led to a significant increase in the level of information extracted from the backscattered laser signal returned from a scattering object. These advances in remote sensing capability and data resolution bring greater information gain, but at the cost of far more complex and challenging big data sets from which meaningful information must be extracted. In this regard, end-to-end analysis techniques recently developed in artificial intelligence (AI), in particular the convolutional neural network (CNN) developed within the deep learning (DL) framework, are well suited to the task. DL techniques have recently outperformed state-of-the-art analysis techniques in a wide range of applications, including RS. This work presents the application of DL for efficient exploitation of hyperspatial UAS-SfM photogrammetry and FW TLS data for land cover monitoring and topographic mapping in a coastal zone. Hyperspatial UAS images, together with TLS point cloud data in which additional information about the scattering properties of the illuminated target within the laser beam footprint is encoded in the returned waveform signals, provide valuable geospatial data resources for uncovering the accurate 3D structure of the surveyed environment.
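To make the DL framework referenced above concrete, the following is a minimal, hypothetical sketch (not the architecture evaluated in this dissertation) of a small convolutional classifier that maps UAS image patches to land-cover classes; the patch size (64x64 RGB), number of classes (5), and layer widths are illustrative assumptions, and PyTorch is used only as an example library.

```python
# Minimal sketch (assumptions: 64x64 RGB patches, 5 land-cover classes).
# Illustrative only; not the dissertation's evaluated architecture.
import torch
import torch.nn as nn

class PatchLandCoverCNN(nn.Module):
    """Small CNN that maps an RGB image patch to land-cover class scores."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = PatchLandCoverCNN()
    patches = torch.randn(8, 3, 64, 64)   # batch of hypothetical UAS patches
    print(model(patches).shape)            # torch.Size([8, 5])
```

In practice, a network of this kind would be trained on labeled patches cropped from UAS orthoimagery and evaluated against held-out land-cover labels for the wetland study area.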
This study presents three main contributions: 1) different deep convolutional neural network (DCNN) architectures, and their efficiencies, are evaluated for classifying land cover within a complex wetland setting using UAS imagery; 2) DCNN-based single image super-resolution (SISR) is employed as a pre-processing technique on low-resolution UAS images to predict higher-resolution images over coastal terrain with natural and built land cover, and its effectiveness for enhancing dense 3D scene reconstruction with SfM photogrammetry is tested; 3) full waveform TLS data are used for point cloud classification and ground surface detection within vegetation using a developed DCNN framework that works directly on the raw, digitized echo waveforms. Results show that the raw returned waveform signals carry more information about a target’s spatial and radiometric properties within the laser beam footprint than waveform attributes derived from traditional waveform processing techniques. Collectively, this study demonstrates useful information retrieval from hyperspatial resolution 2D/3D RS data streams in a DL analysis framework.
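For the third contribution, the abstract notes that the DCNN framework operates directly on the raw, digitized echo waveforms rather than on derived waveform attributes. Below is a minimal, hypothetical sketch of such a model, assuming each echo is a fixed-length vector of digitized amplitude samples and a binary ground/vegetation label; the waveform length (256 samples), channel counts, and class set are illustrative assumptions, not the dissertation's specification.

```python
# Minimal sketch (assumptions: 256-sample digitized echo waveforms,
# 2 classes: ground vs. vegetation). Illustrative only; not the
# dissertation's network.
import torch
import torch.nn as nn

class WaveformCNN(nn.Module):
    """1D CNN that classifies a raw lidar echo waveform."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),                      # 256 -> 128 samples
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),                      # 128 -> 64 samples
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # pool over the time axis
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        # w: (batch, 1, n_samples) digitized amplitudes of the returned echo
        return self.classifier(self.features(w).flatten(1))

if __name__ == "__main__":
    model = WaveformCNN()
    waveforms = torch.randn(4, 1, 256)     # hypothetical digitized echoes
    print(model(waveforms).shape)           # torch.Size([4, 2])
```

Feeding the digitized samples directly to the network, rather than hand-crafted attributes such as echo width or amplitude, is what allows the model to exploit the full information content of the returned waveform.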

Description

Keywords

Classification, convolutional neural network, full-waveform lidar, remote sensing, super-resolution, unmanned aircraft systems

Sponsorship

Rights

This material is made available for use in research, teaching, and private study, pursuant to U.S. Copyright law. The user assumes full responsibility for any use of the materials, including but not limited to, infringement of copyright and publication rights of reproduced materials. Any materials used should be fully credited with their source. All rights are reserved and retained regardless of current or future development or laws that may apply to fair use standards. Permission for publication of this material, in part or in full, must be secured with the author and/or publisher.

Citation