4A.4 Semantic Segmentation of Hyperspatial UAS Imagery for Land Cover Mapping Using Convolutional-Wavelet Neural Networks

Tuesday, 8 January 2019: 9:15 AM
North 124B (Phoenix Convention Center - West and North Buildings)
Mohammad Pashaei, Texas A&M University-Corpus Christi, Corpus Christi, TX; and M. J. Starek

Land cover classification has been a long-standing research area in remote sensing. In recent years, small unmanned aircraft systems (UAS) equipped with digital cameras have enabled the acquisition of hyperspatial imagery with a ground sample distance (GSD) of less than 3 cm. At this resolution, appearance-based image features play as important a role as the spectral content. In this work, Fully Convolutional Neural Networks (FCNs) are applied to the semantic labeling of UAS aerial images collected over a coastal marsh environment. We believe that the most effective methods for dense, accurate semantic segmentation of hyperspatial aerial imagery over complex environments such as wetlands are those that combine Convolutional Neural Network (CNN) features with handcrafted multi-scale features. This work uses a wavelet transform to extract multi-scale features from image patches; combined with the features automatically extracted by the CNN, these yield a more robust, rotation- and translation-invariant multi-scale feature set that a Conditional Random Field (CRF) then uses, as a post-processing step, to perform scene segmentation. Our experimental results demonstrate an increase in accuracy over our earlier proposed end-to-end FCN for UAS hyperspatial image classification of wetland areas.
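
The abstract does not give implementation details, so the following is a minimal sketch of the two hand-engineered stages of the pipeline: multi-scale detail-energy features from a 2-D wavelet decomposition (here via PyWavelets) and fully connected CRF refinement of the network's per-class scores (here via pydensecrf). The wavelet choice, decomposition depth, and CRF kernel parameters are illustrative assumptions, not values from the paper; the CRF shown uses the standard image-based bilateral kernel rather than the authors' combined feature set.

import numpy as np
import pywt
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def wavelet_features(patch, wavelet="haar", levels=3):
    # Multi-scale detail-energy maps from a 2-D wavelet decomposition.
    # Pooling energy over the horizontal/vertical/diagonal detail bands
    # makes the response approximately rotation invariant; the wavelet
    # family and depth here are assumptions, not from the paper.
    coeffs = pywt.wavedec2(patch, wavelet, level=levels)
    h, w = patch.shape
    maps = []
    for detail in coeffs[1:]:          # one (cH, cV, cD) tuple per level
        energy = np.sqrt(sum(np.square(c) for c in detail))
        # nearest-neighbour upsample back to patch resolution so the maps
        # can be stacked channel-wise with CNN feature maps
        ys = np.arange(h) * energy.shape[0] // h
        xs = np.arange(w) * energy.shape[1] // w
        maps.append(energy[np.ix_(ys, xs)])
    return np.stack(maps, axis=0)      # shape: (levels, H, W)

def crf_refine(rgb, softmax_probs, iters=5):
    # Fully connected CRF post-processing over the FCN's softmax output.
    # rgb: uint8 array of shape (H, W, 3); softmax_probs: float array of
    # shape (n_classes, H, W). Kernel parameters are illustrative.
    n_classes, h, w = softmax_probs.shape
    d = dcrf.DenseCRF2D(w, h, n_classes)
    d.setUnaryEnergy(unary_from_softmax(softmax_probs))
    d.addPairwiseGaussian(sxy=3, compat=3)            # smoothness term
    d.addPairwiseBilateral(sxy=60, srgb=10,           # appearance term
                           rgbim=np.ascontiguousarray(rgb), compat=5)
    q = np.array(d.inference(iters)).reshape(n_classes, h, w)
    return q.argmax(axis=0)            # per-pixel class labels

In the full pipeline described above, the wavelet maps would be concatenated with the CNN's learned feature maps before classification; the CRF sketch here simply refines the network's softmax scores with pairwise smoothness and appearance terms.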