Song, HH; Liu, QS; Wang, GJ; Hang, RL; Huang, B (2018). Spatiotemporal Satellite Image Fusion Using Deep Convolutional Neural Networks. IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 11(3), 821-829.
Abstract
We propose a novel spatiotemporal fusion method based on deep convolutional neural networks (CNNs) under the application background of massive remote sensing data. In the training stage, we build two five-layer CNNs to deal with the problems of complicated correspondence and large spatial resolution gaps between MODIS and Landsat images. Specifically, we first learn a nonlinear mapping CNN between MODIS and low-spatial-resolution (LSR) Landsat images and then learn a super-resolution (SR) CNN between LSR Landsat and original Landsat images. In the prediction stage, instead of directly taking the outputs of the CNNs as the fusion result, we design a fusion model consisting of high-pass modulation and a weighting strategy to make full use of the information in prior images. Specifically, we first map the input MODIS images to transitional images via the learned nonlinear mapping CNN and further improve the transitional images to LSR Landsat images via the fusion model; then, via the learned SR CNN, the LSR Landsat images are super-resolved to transitional images, which are further improved to Landsat images via the fusion model. Compared with previous learning-based fusion methods, mainly the sparse-representation-based methods, our CNN-based spatiotemporal method has the following advantages: 1) automatically extracting effective image features; 2) learning an end-to-end mapping between MODIS and LSR Landsat images; and 3) generating more favorable fusion results. To examine the performance of the proposed fusion method, we conduct experiments on two representative Landsat-MODIS datasets by comparing with the sparse-representation-based spatiotemporal fusion model. The quantitative evaluations on all possible prediction dates and the comparison of fusion results on one key date, in both visual effect and quantitative evaluations, demonstrate that the proposed method generates more accurate fusion results.
DOI:
10.1109/JSTARS.2018.2797894
ISSN:
1939-1404
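
For illustration only, the following is a minimal sketch of the two-stage pipeline described in the abstract, written in PyTorch. The abstract does not give layer configurations or the exact fusion formula, so the kernel sizes, channel widths, the MODIS-to-Landsat scale factor of 16, the residual-style high-pass correction, and the names five_layer_cnn, mapping_cnn, sr_cnn, and predict are all assumptions, not the authors' implementation.

# Illustrative sketch only: layer widths, kernel sizes, the scale factor, and the
# high-pass correction are assumptions; the paper's architecture and weighting differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

def five_layer_cnn(in_ch=1, out_ch=1, width=64):
    # Generic five-layer fully convolutional network (assumed configuration).
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 9, padding=4), nn.ReLU(inplace=True),
        nn.Conv2d(width, width, 5, padding=2), nn.ReLU(inplace=True),
        nn.Conv2d(width, width, 5, padding=2), nn.ReLU(inplace=True),
        nn.Conv2d(width, width, 5, padding=2), nn.ReLU(inplace=True),
        nn.Conv2d(width, out_ch, 5, padding=2),
    )

mapping_cnn = five_layer_cnn()  # nonlinear mapping: MODIS -> LSR Landsat
sr_cnn = five_layer_cnn()       # super-resolution: upsampled LSR Landsat -> Landsat

def predict(modis_pred, modis_prior, landsat_prior, scale=16):
    # Hypothetical prediction pipeline: map, correct with the prior image's
    # high-frequency residual, super-resolve, and correct again.
    lsr_prior = F.interpolate(landsat_prior, scale_factor=1 / scale, mode='bilinear')
    # Stage 1: nonlinear mapping, then a simple residual correction from the prior.
    trans_pred = mapping_cnn(modis_pred)
    trans_prior = mapping_cnn(modis_prior)
    lsr_pred = trans_pred + (lsr_prior - trans_prior)
    # Stage 2: interpolate to the Landsat grid, super-resolve, correct again.
    up_pred = F.interpolate(lsr_pred, scale_factor=scale, mode='bilinear')
    up_prior = F.interpolate(lsr_prior, scale_factor=scale, mode='bilinear')
    sr_pred = sr_cnn(up_pred)
    sr_prior = sr_cnn(up_prior)
    return sr_pred + (landsat_prior - sr_prior)

In this sketch the "fusion model" is reduced to a single residual (high-pass) term taken from one prior Landsat-MODIS pair; the paper's weighting strategy over multiple prior dates is not reproduced here.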