Publications

Shang, C.; Jiang, S.; Ling, F.; Li, X. D.; Zhou, Y. D.; Du, Y. (2023). Spectral-Spatial Generative Adversarial Network for Super-Resolution Land Cover Mapping With Multispectral Remotely Sensed Imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 16, 522-537.

Abstract
Super-resolution mapping (SRM) can effectively predict the spatial distribution of land cover classes within mixed pixels at a higher spatial resolution than the original remotely sensed imagery. Uncertainty in the land cover fractions of mixed pixels is one of the most important factors affecting SRM accuracy. Studies have shown that SRM methods using deep learning techniques significantly improve land cover mapping accuracy but do not cope well with spectral-spatial errors. This study proposes an end-to-end SRM model using a spectral-spatial generative adversarial network (SGS) that takes multispectral remotely sensed imagery as direct input and explicitly addresses spectral-spatial errors. The proposed SGS comprises three parts. First, cube-based convolution for spectral unmixing is adopted to generate land cover fraction images. Second, a residual-in-residual dense block jointly exploits spectral and spatial information to reduce spectral errors. Third, a relativistic average GAN is designed as the backbone to further improve super-resolution performance and reduce spectral-spatial errors. SGS was tested in one synthetic and two realistic experiments with multi/hyperspectral remotely sensed imagery as the input, and its results were compared with those of hard classification and several classic SRM methods. The results showed that SGS performed well at reducing land cover fraction errors, reconstructing spatial details, removing unrealistic land cover artifacts, and suppressing false recognition.
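Two of the building blocks named in the abstract, residual-in-residual dense blocks and the relativistic average GAN objective, are standard components from the super-resolution literature (ESRGAN and RaGAN, respectively). The following is a minimal PyTorch sketch of both, not the authors' implementation: the class and function names are hypothetical, and the defaults (64 channels, growth rate 32, residual scaling 0.2) are assumptions borrowed from ESRGAN-style designs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualDenseBlock(nn.Module):
    """Dense block with a scaled local residual connection (ESRGAN-style)."""
    def __init__(self, channels=64, growth=32):
        super().__init__()
        # Each conv sees the block input plus all previous feature maps.
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(4)
        )
        self.fuse = nn.Conv2d(channels + 4 * growth, channels, 3, padding=1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(F.leaky_relu(conv(torch.cat(feats, dim=1)), 0.2))
        # Residual scaling (0.2) stabilizes training of deep GAN generators.
        return x + 0.2 * self.fuse(torch.cat(feats, dim=1))

class RRDB(nn.Module):
    """Residual-in-residual: three dense blocks inside an outer skip connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.blocks = nn.Sequential(*[ResidualDenseBlock(channels) for _ in range(3)])

    def forward(self, x):
        return x + 0.2 * self.blocks(x)

def ragan_d_loss(d_real, d_fake):
    """Relativistic average discriminator loss: real samples should score
    above the batch-average fake score, and fake samples below the
    batch-average real score."""
    loss_real = F.binary_cross_entropy_with_logits(
        d_real - d_fake.mean(), torch.ones_like(d_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        d_fake - d_real.mean(), torch.zeros_like(d_fake))
    return (loss_real + loss_fake) / 2

def ragan_g_loss(d_real, d_fake):
    """Generator side of the relativistic average objective: the same two
    terms with the target labels swapped."""
    loss_real = F.binary_cross_entropy_with_logits(
        d_real - d_fake.mean(), torch.zeros_like(d_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        d_fake - d_real.mean(), torch.ones_like(d_fake))
    return (loss_real + loss_fake) / 2
```

In the pipeline described by the abstract, blocks like these would sit behind the cube-based spectral-unmixing front end; the loss shown is the generic RaGAN formulation, which the paper adapts as its backbone. The paper's cube-based convolution and any spectral-specific loss terms are not reproduced here.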

DOI:
10.1109/JSTARS.2022.3228741

ISSN:
2151-1535