Publications

Sun, WW; Ren, K; Meng, XC; Yang, G; Liu, Q; Zhu, L; Peng, JT; Li, JC (2024). Generating high-resolution hyperspectral time series datasets based on unsupervised spatial-temporal-spectral fusion network incorporating a deep prior. INFORMATION FUSION, 111, 102499.

Abstract
Over the past decade, image fusion has become an indispensable tool for surface monitoring owing to its ability to reconstruct high-quality surface reflectance. While satisfactory progress has been made in the spatial-spectral and spatial-temporal fusion of remote sensing images, spatial-temporal-spectral fusion remains challenging because of the multiple nonlinear relationships introduced by inconsistent temporal, spatial, and spectral resolutions. In this paper, we propose a novel approach to the spatial-temporal-spectral fusion of remote sensing images using a triple-discriminator Generative Adversarial Network, called TDGAN. Our approach represents the first attempt at deep-learning-based spatial-temporal-spectral fusion. Specifically, TDGAN achieves unsupervised learning by incorporating a deep spectral transformation prior and, for the first time, explores the performance of the Swin Transformer in spectral super-resolution. Additionally, three discriminators are designed based on prior consistencies: phenological consistency for data acquired at different times, and spatial-spectral consistency for data acquired at the same time. These discriminators effectively enhance the prediction accuracy of missing hyperspectral images. Extensive simulated and real-data experiments were conducted to evaluate the effectiveness of the proposed TDGAN. The results demonstrate its robustness and performance comparable to other state-of-the-art methods. The proposed approach provides high-quality hyperspectral datasets with high spatial and temporal resolutions, facilitating continuous land-cover observation.
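The abstract's core idea, a single generator trained against three consistency discriminators whose adversarial losses are combined, can be sketched as below. This is a minimal toy illustration, not the authors' implementation: the "discriminators" are fixed logistic scorers on a flattened patch, and all names, shapes, and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_score(patch, w):
    # Stand-in discriminator: logistic "realness" score of a flattened
    # image patch. TDGAN's actual discriminators are learned networks
    # enforcing phenological and spatial-spectral consistency.
    return sigmoid(patch.ravel() @ w)

# Toy fused patch (8x8 pixels, 4 bands) and three hypothetical
# discriminators with independent weights.
patch = rng.normal(size=(8, 8, 4))
disc_weights = [rng.normal(size=patch.size) * 0.01 for _ in range(3)]

def generator_adversarial_loss(fake_patch, weights):
    # Non-saturating generator loss summed over the three discriminators:
    # L_G = -sum_k log D_k(G(z)); a small eps guards against log(0).
    eps = 1e-12
    return -sum(np.log(discriminator_score(fake_patch, w) + eps)
                for w in weights)

loss = generator_adversarial_loss(patch, disc_weights)
print(loss)
```

In the paper's setting each discriminator would judge a different consistency (across acquisition times, and within a single acquisition), so the summed loss pushes the generator toward images that satisfy all three priors at once.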

DOI:
10.1016/j.inffus.2024.102499

ISSN:
1872-6305