Publications

Zhu, Z. S.; Tao, Y. X.; Luo, X. B. (2022). HCNNet: A Hybrid Convolutional Neural Network for Spatiotemporal Image Fusion. IEEE Transactions on Geoscience and Remote Sensing, 60, 2005716.

Abstract
In recent years, spatiotemporal fusion (STF) methods for remote sensing (RS) images based on deep learning have developed by leaps and bounds. However, most existing methods use 2-D convolution (Conv) to explore features. 3-D Conv can explore time-dimensional features, but it has a larger memory footprint and is therefore rarely used. In addition, current STF methods based on convolutional neural networks (CNNs) mainly follow one of two strategies: 1) use 2-D Conv to extract features from all bands of the input image together and fuse these features to predict the multiband image directly, or 2) use 2-D Conv to extract features from individual bands, predict the reflectance data of each band separately, and finally stack the predicted bands to synthesize the multiband image. The former does not sufficiently consider the spectral and reflectance differences between bands, while the latter ignores the similarity of spatial structures between adjacent bands and their spectral correlation. To solve these problems, we propose a 2-D/3-D hybrid CNN called HCNNet, in which the 2D-CNN branch extracts spatial features of single-band images and the 3D-CNN branch extracts spatiotemporal features of single-band images. After fusing the features of the two branches, we introduce neighboring-band features to share spatial information, so that the complementary information yields the single-band features and images; finally, the single-band images are stacked to generate the multiband image. Visual assessment and metric evaluation on three publicly available datasets show that our method predicts better images than five comparison methods.
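To make the dual-branch idea concrete, the following PyTorch sketch illustrates a 2-D/3-D hybrid per-band predictor of the general kind the abstract describes. It is a minimal sketch, not the authors' HCNNet implementation: the layer widths, kernel sizes, temporal pooling, and fusion step are all assumptions, and the paper's neighboring-band feature sharing is omitted for brevity.

```python
# Illustrative sketch of a 2-D/3-D hybrid dual-branch CNN for single-band
# spatiotemporal fusion. NOT the authors' HCNNet: channel counts, kernel
# sizes, temporal pooling, and the fusion strategy are assumptions.
import torch
import torch.nn as nn

class HybridBranchSketch(nn.Module):
    def __init__(self, feat: int = 32):
        super().__init__()
        # 2D-CNN branch: spatial features from one single-band image (B, 1, H, W).
        self.branch2d = nn.Sequential(
            nn.Conv2d(1, feat, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 3D-CNN branch: spatiotemporal features from a time series of the
        # same band (B, 1, T, H, W); 3-D Conv mixes the time dimension.
        self.branch3d = nn.Sequential(
            nn.Conv3d(1, feat, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Fuse the concatenated dual-branch features and predict one band.
        self.fuse = nn.Conv2d(2 * feat, 1, kernel_size=3, padding=1)

    def forward(self, band: torch.Tensor, band_series: torch.Tensor) -> torch.Tensor:
        f2d = self.branch2d(band)                       # (B, feat, H, W)
        f3d = self.branch3d(band_series).mean(dim=2)    # collapse time: (B, feat, H, W)
        return self.fuse(torch.cat([f2d, f3d], dim=1))  # predicted band (B, 1, H, W)

# Per-band predictions would then be stacked along the channel axis to
# synthesize the multiband image, e.g. torch.cat(band_preds, dim=1).
```

Processing each band through its own lightweight 2-D/3-D pair, rather than one multiband network, is what lets a design like this respect per-band reflectance differences while the 3-D branch still captures temporal change.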

DOI:
10.1109/TGRS.2022.3177749

ISSN:
1558-0644