Fazlali, H.; Shirani, S.; McDonald, M.; Kirubarajan, T. (2020). Cloud/haze detection in airborne videos using a convolutional neural network. Multimedia Tools and Applications, 79(39-40), 28587-28601.

In airborne video surveillance, moving object detection and target tracking are key steps. However, under bad weather conditions, the presence of clouds and haze, or even smoke rising from buildings, can make processing these videos very challenging. Current cloud detection or classification methods consider only a single image. Moreover, the images they use are often captured by satellites or planes at high altitudes, with very long ranges to clouds, which helps distinguish cloudy regions from non-cloudy ones. In this paper, a new approach for cloud and haze detection is proposed that exploits both spatial and temporal information in airborne videos. In this method, several consecutive frames are divided into patches. Co-located patches from consecutive frames are then collected into patch sets and fed into a deep convolutional neural network. The network is trained to learn both the appearance of clouds and their motion characteristics. Therefore, instead of relying on single-frame patches, the decision on a patch in the current frame is based on patches from the preceding and subsequent frames as well. This approach avoids discarding the temporal information about clouds in videos, which may contain important cues for discriminating between cloudy and non-cloudy regions. Experimental results show that using temporal information in addition to the spatial characteristics of haze and clouds greatly increases detection accuracy.
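The patch-set construction described in the abstract can be sketched as follows. This is a minimal illustration only: the function name, the 32x32 patch size, and the 5-frame temporal window are assumptions, not values taken from the paper, and the CNN classifier that would consume these patch sets is omitted.

```python
import numpy as np

def build_patch_sets(frames, patch_size=32, temporal_window=5):
    """Divide each frame of a grayscale video into non-overlapping
    patches and stack co-located patches from `temporal_window`
    consecutive frames into patch sets.

    Illustrative sketch; patch size and window length are assumed,
    not taken from the paper.
    """
    T, H, W = frames.shape
    ph = pw = patch_size
    nh, nw = H // ph, W // pw  # patch grid dimensions
    sets = []
    # Slide a temporal window over the video; for each window,
    # collect the patch at every grid position across all frames,
    # so each set captures both appearance and local motion.
    for t in range(T - temporal_window + 1):
        window = frames[t:t + temporal_window]
        for i in range(nh):
            for j in range(nw):
                patch_set = window[:, i*ph:(i+1)*ph, j*pw:(j+1)*pw]
                sets.append(patch_set)
    # Shape: (num_sets, temporal_window, patch_size, patch_size)
    return np.stack(sets)

# Toy video: 8 frames of 64x64 pixels
video = np.random.rand(8, 64, 64).astype(np.float32)
patch_sets = build_patch_sets(video)
print(patch_sets.shape)  # (16, 5, 32, 32): 4 windows x 2x2 patch grid
```

Each patch set would then be labeled cloudy or non-cloudy by the network, which sees the temporal dimension as input channels (or via 3D convolutions), letting it exploit cloud motion cues that a single-frame classifier discards.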