Publications

Fang, S.; Meng, S. Y.; Zhang, J.; Cao, Y. (2022). Two-stream spatiotemporal image fusion network based on difference transformation. Journal of Applied Remote Sensing, 16(3), 038506.

Abstract
For satellite imaging instruments, the trade-off between spatial and temporal resolution means that image sequences cannot offer both high spatial and high temporal resolution. Spatiotemporal image fusion (STIF) provides a solution for generating images with both high spatial and high temporal resolution, thus expanding the applications of existing satellite imagery. Most deep learning-based STIF methods treat the task as a whole and construct an end-to-end model without modeling the intermediate physical process, which leads to high model complexity, limited interpretability, and reduced fusion accuracy. To address this problem, we propose a two-stream difference transformation spatiotemporal fusion (TSDTSF) network, which comprises a transformation stream and a fusion stream. In the transformation stream, an image difference transformation module reduces the pixel distribution difference between images from different sensors at the same spatial resolution, and a feature difference transformation module improves the feature quality of low-resolution images. The fusion stream focuses on feature fusion and image reconstruction. TSDTSF shows superior performance in accuracy, visual quality, and robustness. The experimental results show that TSDTSF achieves an average coefficient of determination of R² = 0.7847 and an average root mean square error of RMSE = 0.0266, outperforming the next-best method's averages (R² = 0.7519, RMSE = 0.0289). The quantitative and qualitative experimental results on various datasets demonstrate its superiority over state-of-the-art methods. © 2022 Society of Photo-Optical Instrumentation Engineers (SPIE)
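The reported figures use two standard fusion-evaluation metrics. The sketch below shows one common way to compute them against a reference high-resolution image: a minimal illustration assuming per-band computation followed by averaging over bands; the array shapes, value range, and band-wise averaging are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def rmse(reference: np.ndarray, fused: np.ndarray) -> float:
    """Root mean square error between a reference band and a fused band."""
    return float(np.sqrt(np.mean((reference - fused) ** 2)))

def r_squared(reference: np.ndarray, fused: np.ndarray) -> float:
    """Coefficient of determination (R^2) of the fused band against the reference."""
    ss_res = np.sum((reference - fused) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Example: average the metrics over all spectral bands of one fused scene.
# Shapes are (bands, height, width); surface-reflectance values assumed in [0, 1].
reference_img = np.random.rand(6, 256, 256)                     # stand-in for a real reference image
fused_img = reference_img + 0.02 * np.random.randn(6, 256, 256) # stand-in for a fusion result

avg_rmse = np.mean([rmse(r, f) for r, f in zip(reference_img, fused_img)])
avg_r2 = np.mean([r_squared(r, f) for r, f in zip(reference_img, fused_img)])
print(f"average RMSE = {avg_rmse:.4f}, average R^2 = {avg_r2:.4f}")
```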

DOI:
10.1117/1.JRS.16.038506

ISSN:
1931-3195