Publications

Ao, ZR; Sun, Y; Pan, XY; Xin, QC (2022). Deep Learning-Based Spatiotemporal Data Fusion Using a Patch-to-Pixel Mapping Strategy and Model Comparisons. IEEE Transactions on Geoscience and Remote Sensing, 60, 5407718.

Abstract
Tradeoffs among the spatial, spectral, and temporal resolutions of satellite sensors make it difficult to acquire remote sensing images at both high spatial and high temporal resolutions from a single sensor. Studies have developed methods to fuse spatiotemporal data from different satellite sensors, but these methods often assume linear changes in surface reflectance over time and rely on empirical rules and handcrafted features. Here, we propose a dense spatiotemporal fusion (DenseSTF) network based on the convolutional neural network (CNN) to address these problems. DenseSTF uses a patch-to-pixel modeling strategy that provides abundant texture details for each pixel in the target fine image to handle heterogeneous landscapes, and it models both forward and backward temporal dependencies to account for land cover changes. Moreover, DenseSTF adopts a mapping function with few assumptions and empirical rules, which allows reliable relationships to be established between the coarse and fine images. We tested DenseSTF in three contrasting scenes with different degrees of heterogeneity and temporal change, and compared it with three rule-based fusion approaches and three CNNs. Experimental results indicate that DenseSTF provides accurate fusion results and outperforms the other tested methods, especially when land cover changes abruptly. The structure of the deep learning network largely determines the success of data fusion. Our study develops a novel CNN-based approach using a patch-to-pixel mapping strategy and highlights the effectiveness of deep learning networks in the spatiotemporal fusion of remote sensing data.
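The patch-to-pixel strategy described above can be illustrated with a minimal sketch: for every pixel of the target fine image, the network receives a small surrounding patch of the input imagery as context. The snippet below only demonstrates this patch extraction step with NumPy; the function name, patch size, and padding mode are illustrative assumptions, not the actual DenseSTF implementation.

```python
import numpy as np

def extract_patches(image, patch_size=3):
    """Patch-to-pixel sampling (illustrative): gather, for every pixel
    of the target grid, the surrounding patch_size x patch_size window
    from an edge-padded copy of the input image."""
    pad = patch_size // 2
    padded = np.pad(image, pad, mode="edge")  # replicate borders
    h, w = image.shape
    patches = np.empty((h * w, patch_size, patch_size), dtype=image.dtype)
    k = 0
    for i in range(h):
        for j in range(w):
            # window centered on original pixel (i, j)
            patches[k] = padded[i:i + patch_size, j:j + patch_size]
            k += 1
    return patches  # one patch per target pixel

img = np.arange(16, dtype=float).reshape(4, 4)
P = extract_patches(img, patch_size=3)
print(P.shape)       # (16, 3, 3): one 3x3 patch per pixel
print(P[5][1, 1])    # 5.0: patch center equals the target pixel value
```

In a fusion network, each such patch would be fed through convolutional layers to predict the single fine-resolution pixel at its center, which is what gives every output pixel access to local texture context.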

DOI:
10.1109/TGRS.2022.3154406

ISSN:
1558-0644