Publications

Li, W. S.; Yang, C.; Peng, Y. D.; Zhang, X. Y. (2021). A Multi-Cooperative Deep Convolutional Neural Network for Spatiotemporal Satellite Image Fusion. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, 10174-10188.

Abstract
Remote sensing satellite images with high temporal and high spatial resolution play a critical role in earth science applications. However, it is difficult for a single satellite to obtain such images due to technical and cost constraints, so spatiotemporal image fusion based on deep learning has received extensive attention in recent years. This article proposes a multi-cooperative deep convolutional neural network (MCDNet) for spatiotemporal satellite image fusion. The method is a new multi-network model in which several networks work together to reconstruct the predicted image; it consists of a super-resolution network, a difference reconstruction network, and a collaborative training network. First, the super-resolution network combines a novel multiscale mechanism with dilated convolutions to make full use of the spectral information of the coarse image and upsamples it to a transitional image that matches the fine image. The difference reconstruction network then exploits structural relevance to reconstruct the fine difference image, while the collaborative training network extracts hidden information from the fine image and uses temporal relevance to constrain the training of the difference reconstruction network. Finally, the fine difference image and the known fine image are combined to complete the fusion. A new compound loss function helps the multi-network model carry out this cooperative training more effectively. Experiments on two datasets and comparisons with existing fusion algorithms show, both subjectively and objectively, that MCDNet reconstructs higher-quality predicted images.
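The multi-network idea in the abstract can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation: the module names (SuperResolutionNet, DifferenceReconstructionNet, MCDNetSketch), the layer counts, channel widths, and dilation rates are all assumptions chosen for brevity, and the collaborative training network and compound loss, which act only at training time, are omitted.

```python
# Illustrative sketch of the multi-network fusion idea described in the abstract.
# Module names, layer counts, channel widths, and dilation rates are assumptions,
# not the authors' published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SuperResolutionNet(nn.Module):
    """Upsamples the coarse image to a transitional image at the fine
    resolution using parallel dilated convolutions (a multiscale mechanism)."""
    def __init__(self, in_ch=4, width=32):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, width, 3, padding=d, dilation=d) for d in (1, 2, 4)]
        )
        self.fuse = nn.Conv2d(3 * width, in_ch, 3, padding=1)

    def forward(self, coarse, fine_size):
        x = F.interpolate(coarse, size=fine_size, mode="bilinear",
                          align_corners=False)
        feats = torch.cat([F.relu(b(x)) for b in self.branches], dim=1)
        return self.fuse(feats)  # transitional image at fine resolution


class DifferenceReconstructionNet(nn.Module):
    """Reconstructs the fine difference image from the upsampled coarse
    difference and the transitional image."""
    def __init__(self, in_ch=4, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2 * in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, in_ch, 3, padding=1),
        )

    def forward(self, coarse_diff_up, transitional):
        return self.body(torch.cat([coarse_diff_up, transitional], dim=1))


class MCDNetSketch(nn.Module):
    """Combines the predicted fine difference image with the known fine image
    to produce the fused prediction (training-time components omitted)."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.sr = SuperResolutionNet(in_ch)
        self.diff = DifferenceReconstructionNet(in_ch)

    def forward(self, coarse_t1, coarse_t2, fine_t1):
        fine_size = fine_t1.shape[-2:]
        transitional = self.sr(coarse_t2, fine_size)
        coarse_diff_up = F.interpolate(coarse_t2 - coarse_t1, size=fine_size,
                                       mode="bilinear", align_corners=False)
        fine_diff = self.diff(coarse_diff_up, transitional)
        return fine_t1 + fine_diff  # predicted fine image at the target date


# Example usage with assumed patch sizes (coarse 16x16, fine 256x256, 4 bands).
if __name__ == "__main__":
    model = MCDNetSketch(in_ch=4)
    coarse_t1 = torch.randn(1, 4, 16, 16)
    coarse_t2 = torch.randn(1, 4, 16, 16)
    fine_t1 = torch.randn(1, 4, 256, 256)
    pred_fine_t2 = model(coarse_t1, coarse_t2, fine_t1)  # (1, 4, 256, 256)
```

The sketch only mirrors the data flow stated in the abstract: a coarse image is lifted to a transitional image, a fine difference image is predicted, and the result is added to the known fine image.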

DOI:
10.1109/JSTARS.2021.3113163

ISSN:
1939-1404