Li, W. S.; Zhang, X. Y.; Peng, Y. D.; Dong, M. L. (2020). DMNet: A Network Architecture Using Dilated Convolution and Multiscale Mechanisms for Spatiotemporal Fusion of Remote Sensing Images. IEEE Sensors Journal, 20(20), 12190-12202.

Since remote sensing images cannot simultaneously offer high temporal resolution and high spatial resolution, spatiotemporal fusion of remote sensing images has attracted increasing attention in recent years. Additionally, with the successful application of deep learning in various fields, spatiotemporal fusion algorithms based on deep learning have gradually diversified. We propose a network framework, which we refer to as DMNet, based on deep convolutional neural networks that incorporate dilated convolution and multiscale mechanisms. In this method, we concatenate the feature maps to be fused, avoiding complex fusion schemes that could introduce noise. The multiscale mechanism then extracts contextual information from the image at various scales, enriching image detail. Skip connections carry feature maps from shallow convolutional layers forward, preventing the loss of important image features during convolution. Additionally, dilated convolution expands the receptive field of the convolution kernel, which aids the extraction of fine details. To evaluate the robustness of our method, we conduct experiments on two datasets and compare the results with those of six representative spatiotemporal fusion methods. Both visual and objective results demonstrate the superior performance of our method.
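The paper does not include code; as a hedged illustration of the receptive-field claim above, the 1-D sketch below (all function names and values are ours, not the authors') shows how a dilation rate d lets a k-tap kernel cover (k - 1) * d + 1 input positions without adding any parameters.

```python
# Minimal sketch (not DMNet itself): dilated 1-D convolution, valid mode.
# A 3-tap kernel with dilation 2 spans 5 input samples, versus 3 samples
# at dilation 1, using the same 3 weights.

def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D convolution with a dilated kernel."""
    span = (len(kernel) - 1) * dilation + 1  # effective receptive field
    out = []
    for start in range(len(signal) - span + 1):
        acc = 0.0
        # Sample the input at strides of `dilation` under each kernel tap.
        for j, w in enumerate(kernel):
            acc += w * signal[start + j * dilation]
        out.append(acc)
    return out

signal = [1, 2, 3, 4, 5, 6, 7, 8]
kernel = [1, 1, 1]  # toy 3-tap kernel

dense = dilated_conv1d(signal, kernel, dilation=1)    # each output sums 3 adjacent samples
dilated = dilated_conv1d(signal, kernel, dilation=2)  # each output spans 5 samples
print(dense)    # [6.0, 9.0, 12.0, 15.0, 18.0, 21.0]
print(dilated)  # [9.0, 12.0, 15.0, 18.0]
```

In 2-D this is the `dilation` argument of a standard convolution layer; stacking such layers grows the receptive field quickly while keeping small details reachable, which is the property the method exploits.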