Li, L; Liu, P; Wu, J; Wang, LZ; He, GJ (2020). Spatiotemporal Remote-Sensing Image Fusion With Patch-Group Compressed Sensing. IEEE ACCESS, 8, 209199-209211.

Generally, it is difficult to acquire remote sensing data with both high spatial and high temporal resolution from a single satellite. In this paper, a novel compressed sensing (CS)-based spatiotemporal data fusion (CSBS) method is proposed to synthesize such high-spatiotemporal-resolution images. With CSBS, a low-spatial-resolution remote sensing image is treated as a sampling of the high-spatial-resolution image. The spatial down-sampling of images is modeled as a CS measurement matrix in CSBS. Moreover, continuity constraints in the temporal domain are also introduced into the CSBS objective function for CS reconstruction. To better represent the intrinsic features of the data, images are segmented into many small patches and clustered into several groups via K-means. Dictionary training, measurement matrix identification, and high-resolution prediction are carried out group by group. Based on features learned from the patch groups, the transformational relationship between spatiotemporal images of different resolutions is easily identified. Compared with previous compressed sensing and dictionary learning methods, CSBS is characterized by: (1) the patch-group strategy in dictionary learning and measurement matrix learning; and (2) the combination of continuity in the temporal domain with sparsity in the spatial domain. The proposed method is then comprehensively compared with other methods using land-surface reflectance data. Experimental results validate the effectiveness and advantages of CSBS for spatiotemporal data fusion.
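Two building blocks of the pipeline described above can be sketched in code: modeling spatial down-sampling as a CS measurement matrix, and grouping image patches with K-means before group-wise processing. The sketch below is illustrative only; the scale factor, patch size, and number of clusters are hypothetical choices, not values from the paper, and a simple block-average operator stands in for the sensor's true point-spread function.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_average_matrix(h, w, f):
    """Measurement matrix A such that A @ x equals f-by-f block averaging
    of an h-by-w image x flattened in row-major order.
    This models spatial down-sampling as a linear CS measurement y = A x."""
    H, W = h // f, w // f
    A = np.zeros((H * W, h * w))
    for i in range(H):
        for j in range(W):
            for di in range(f):
                for dj in range(f):
                    A[i * W + j, (i * f + di) * w + (j * f + dj)] = 1.0 / f**2
    return A

def extract_patches(img, p):
    """Split an image into non-overlapping p-by-p patches (rows of the result)."""
    ph, pw = img.shape[0] // p, img.shape[1] // p
    return (img[:ph * p, :pw * p]
            .reshape(ph, p, pw, p)
            .transpose(0, 2, 1, 3)
            .reshape(-1, p * p))

def kmeans(X, k, iters=20, seed=0):
    """Minimal K-means: assign each patch (row of X) to one of k groups.
    Dictionary learning and prediction would then run group by group."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# High-resolution image (random stand-in), down-sampled by factor 2.
h = w = 8
x_hi = rng.random((h, w))
A = block_average_matrix(h, w, 2)
x_lo = (A @ x_hi.ravel()).reshape(h // 2, w // 2)

# Check that the matrix form matches direct block averaging.
direct = x_hi.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
assert np.allclose(x_lo, direct)

# Group 2x2 patches into 3 clusters (both numbers are illustrative).
patches = extract_patches(x_hi, 2)
labels = kmeans(patches, 3)
```

In the actual method, each patch group would get its own trained dictionary and learned measurement matrix, and the sparse-coding reconstruction (with the temporal continuity term in the objective) would be solved per group.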