Publications

Zhang, H. Y.; Song, Y. Y.; Han, C.; Zhang, L. P. (2021). Remote Sensing Image Spatiotemporal Fusion Using a Generative Adversarial Network. IEEE Transactions on Geoscience and Remote Sensing, 59(5), 4273-4286.

Abstract
Due to technological limitations and budget constraints, spatiotemporal fusion is considered a promising way to deal with the tradeoff between the temporal and spatial resolutions of remote sensing images. Furthermore, the generative adversarial network (GAN) has shown its capability in a variety of applications. This article presents a remote sensing image spatiotemporal fusion method using a GAN (STFGAN), which adopts a two-stage framework with an end-to-end image fusion GAN (IFGAN) for each stage. The IFGAN contains a generator and a discriminator in competition with each other under the guidance of the optimization function. Considering the large spatial resolution gap between the high-spatial, low-temporal (HSLT) resolution Landsat imagery and the corresponding low-spatial, high-temporal (LSHT) resolution MODIS imagery, a feature-level fusion strategy is adopted. Specifically, for the generator, we first super-resolve the MODIS images while also extracting the high-frequency features of the Landsat images, and we then integrate the features from the MODIS and Landsat images. STFGAN is able to learn an end-to-end mapping between the Landsat-MODIS image pairs and predict the Landsat-like image for a prediction date by considering all the bands. STFGAN significantly improves the accuracy of phenological change and land-cover-type change prediction with the help of residual blocks and two prior Landsat-MODIS image pairs. To examine the performance of the proposed STFGAN method, experiments were conducted on three representative Landsat-MODIS data sets. The results clearly illustrate the effectiveness of the proposed method.

DOI:
10.1109/TGRS.2020.3010530

ISSN:
0196-2892
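
The abstract describes a generator that super-resolves the coarse MODIS input, extracts high-frequency features from prior Landsat imagery, and fuses the two at the feature level before residual refinement. The sketch below is only a rough illustration of that idea, written in PyTorch under our own assumptions; the class names (FusionGenerator, ResidualBlock), layer sizes, and the exact way the high-frequency branch is formed are hypothetical and are not taken from the authors' implementation or the IFGAN architecture as published.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Plain residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return x + out


class FusionGenerator(nn.Module):
    """Illustrative generator: super-resolve MODIS, extract Landsat
    high-frequency detail, then fuse the two branches at the feature level.
    All hyperparameters here are placeholders, not the paper's values."""
    def __init__(self, bands=6, feats=64, n_res=4, scale=16):
        super().__init__()
        self.scale = scale
        # Branch 1: lift the upsampled MODIS bands into feature space.
        self.modis_head = nn.Conv2d(bands, feats, 3, padding=1)
        # Branch 2: extract high-frequency detail from a prior Landsat image
        # paired with its Landsat-minus-MODIS residual (assumed formulation).
        self.landsat_head = nn.Conv2d(2 * bands, feats, 3, padding=1)
        # Feature-level fusion followed by residual refinement.
        self.fuse = nn.Conv2d(2 * feats, feats, 1)
        self.res_blocks = nn.Sequential(
            *[ResidualBlock(feats) for _ in range(n_res)])
        self.tail = nn.Conv2d(feats, bands, 3, padding=1)

    def forward(self, modis_pred, landsat_prior, modis_prior):
        # Upsample the prediction-date MODIS image to Landsat resolution.
        up = F.interpolate(modis_pred, scale_factor=self.scale,
                           mode='bicubic', align_corners=False)
        m_feat = F.relu(self.modis_head(up))
        # The difference between the prior Landsat image and its upsampled
        # MODIS counterpart carries the fine spatial detail.
        up_prior = F.interpolate(modis_prior, scale_factor=self.scale,
                                 mode='bicubic', align_corners=False)
        l_feat = F.relu(self.landsat_head(
            torch.cat([landsat_prior, landsat_prior - up_prior], dim=1)))
        fused = self.fuse(torch.cat([m_feat, l_feat], dim=1))
        return self.tail(self.res_blocks(fused))


if __name__ == "__main__":
    # Shapes chosen only for illustration: 6-band images, 16x resolution gap.
    g = FusionGenerator()
    modis_pred = torch.randn(1, 6, 16, 16)     # coarse, prediction date
    modis_prior = torch.randn(1, 6, 16, 16)    # coarse, prior date
    landsat_prior = torch.randn(1, 6, 256, 256)  # fine, prior date
    print(g(modis_pred, landsat_prior, modis_prior).shape)  # (1, 6, 256, 256)
```

In an adversarial setup such as the one the abstract outlines, a generator of this kind would be trained against a discriminator that scores fused outputs versus real Landsat images; that component, the two-stage framework, and the use of two prior Landsat-MODIS pairs are omitted here for brevity.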