Publications

Bhavana, D; Likhita, N; Madhumitha, GV; Ratnam, DV (2023). Machine learning based object-level crop classification of PlanetScope data at South India Basin. EARTH SCIENCE INFORMATICS.

Abstract
Crop classification is of great significance, as it enables agricultural inventory and research on crop mapping, crop yield, and economic analysis. Most studies to date rely on openly accessible satellites such as Landsat, Sentinel, and MODIS, which suffer from low spatial resolution and long revisit intervals. Further, classification in these studies is carried out at the pixel level, which is unreliable because per-pixel spectral analysis often gives ambiguous results. This study therefore proposes the use of high-resolution temporal PlanetScope imagery, with an average spatial resolution of 3.7 m, for object-based image analysis (OBIA) of the Krishna River basin in southern India. The area grows a variety of crops, including paddy, chilli, maize, lily, colocasia, curry leaves, mint, bhindi, banana, betel leaves, and sugarcane. The latest available ground-truth data was from 2017 and was therefore used; although this data is a year older than that used in a previous study, this work still achieved commendable accuracies. The machine learning (ML) modelling and evaluation were carried out in Python, while Google Earth Engine (GEE) served as the primary platform for feature extraction, dataset preparation, and visualization of the final crop-classified image. For object-based classification, this paper puts forward Support Vector Machines (SVM). To provide a comparative analysis justifying the performance of the modelled SVM, several other ML algorithms, namely Convolutional Neural Networks (CNN), Random Forest (RF), Artificial Neural Network (ANN), and a Bayes classifier, were also modelled for this methodology in the study area. The results show that SVM performed best, with a high accuracy of 94.3%; all other algorithms were less accurate, though still above 75%.
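The paper's object-level SVM workflow can be illustrated with a minimal sketch in Python using scikit-learn. Everything here is an assumption for illustration: the feature matrix is a synthetic stand-in for per-object mean reflectances extracted in GEE, the three crop classes are placeholders, and the RBF kernel and its hyperparameters are a common OBIA baseline rather than the authors' exact configuration.

```python
# Hedged sketch of object-level crop classification with an SVM.
# Assumes object features (e.g. per-segment mean band reflectances)
# were already extracted; all data below is synthetic and illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Stand-in for mean reflectance per image object in 4 PlanetScope bands
X = rng.normal(size=(300, 4))
y = rng.integers(0, 3, size=300)  # three hypothetical crop classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# RBF-kernel SVM with feature scaling, a typical baseline for OBIA
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```

With real per-object features and ground-truth labels in place of the synthetic arrays, the same pipeline yields the kind of overall-accuracy figure the abstract reports.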

DOI:
10.1007/s12145-022-00922-4

ISSN:
1865-0481