Publications

Erdem, F.; Avdan, U. (2023). STFRDN: a residual dense network for remote sensing image spatiotemporal fusion. International Journal of Remote Sensing, 44(10), 3259-3277.

Abstract
Convolutional Neural Networks (CNNs) are useful models for spatiotemporal fusion, especially under strong temporal changes. Convolutional layers with receptive fields of various sizes produce hierarchical features that can improve prediction performance. In this study, we aimed to develop a CNN model for spatiotemporal fusion that can extract hierarchical features and fully use them. The proposed network is composed of residual dense blocks with local dense connections that effectively utilize the hierarchical features. We conducted the experiments on the newly proposed Kansas dataset, which includes Sentinel-2 and Sentinel-3 image pairs with large resolution differences and strong temporal changes. Based on both quantitative and qualitative evaluation, the experiments revealed that the STFRDN model outperformed the Flexible Spatiotemporal DAta Fusion (FSDAF) 2.0, Reliable and Adaptive Spatiotemporal Data Fusion (RASDF), Fit-FC, and DMNet methods in all of the test groups. The second major finding was that the STFRDN model produced sufficiently accurate predictions even in cases with such large resolution differences. To obtain more accurate predictions when there is a significant resolution difference, it is important to extract the right features to model the discrepancy. Loss of spatial detail can be avoided by making full use of the hierarchical features without any manipulation after extraction. The results of this study provide insight into the significance of extracting and fully utilizing hierarchical features.
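The connectivity pattern the abstract refers to — residual dense blocks whose inner layers each see all earlier feature maps — can be sketched in a few lines. The sketch below is illustrative only: it follows the general residual dense block design popularized in the RDN literature, not the actual STFRDN architecture, and it replaces 2D convolutions with a simple channel-mixing linear map (a stand-in for a 1x1 convolution) so the dense-concatenation and residual-skip structure is easy to see. All function names, sizes, and layer counts here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_mix(x, out_ch):
    # Stand-in for a 1x1 convolution: mixes channels independently at
    # each pixel. Weights are drawn fresh per call (fine for a sketch).
    w = rng.standard_normal((x.shape[1], out_ch)) * 0.1
    return np.maximum(x @ w, 0.0)  # ReLU activation

def residual_dense_block(x, growth=8, n_layers=3):
    # Local dense connections: every inner layer receives the
    # concatenation of the block input and ALL earlier layer outputs,
    # so hierarchical features are reused rather than discarded.
    feats = [x]
    for _ in range(n_layers):
        feats.append(channel_mix(np.concatenate(feats, axis=1), growth))
    # Local feature fusion: a 1x1-style mix back to the input width ...
    fused = channel_mix(np.concatenate(feats, axis=1), x.shape[1])
    # ... followed by a local residual connection.
    return x + fused

x = rng.standard_normal((16, 32))  # 16 pixels (flattened), 32 channels
y = residual_dense_block(x)
print(y.shape)  # (16, 32): the residual skip preserves the input shape
```

Because the fusion step maps back to the input channel width, blocks of this form can be stacked freely, which is what lets such networks grow deep while keeping every intermediate feature map accessible.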

DOI:
10.1080/01431161.2023.2221800

ISSN:
1366-5901