
Recognition of arable lands on the territory of Samara region using satellite images for solving land use problems
A.Y. Bavrina 1, A.A. Agafonov 1

Samara National Research University,
443086, Samara, Russia, Moskovskoye Shosse 34


DOI: 10.18287/2412-6179-CO-1754

Pages: 1002-1011.

Full text of article: Russian language.

Abstract:
The paper presents a technology for recognizing arable land in remote sensing images to address land use problems at the regional level of the Russian Federation. The application of modern deep learning methods to delineating arable land boundaries in both single and series of medium-resolution Sentinel-2 images is investigated. Experiments show that the best quality is achieved with the UPerNet architecture when multiscale features are extracted by the Swin Transformer v2 backbone. The resulting vector layer of arable land is used to detect illegal plowing of specially protected natural areas. The work contributes to improving the efficiency of regional natural resource management systems, demonstrating how artificial intelligence and remote sensing imagery help to automate the solution of land use problems.
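The final step mentioned in the abstract — detecting illegal plowing by overlaying the recognized arable-land layer on protected-area boundaries — can be illustrated in a simplified raster form. The sketch below is only an assumption of how such an overlay might work: the paper performs the analysis on vector layers, while here binary masks and the function name are illustrative.

```python
# Simplified raster sketch: flag protected-area pixels that the
# segmentation model classified as arable. In the paper the overlay is
# done on vector layers of arable land and protected areas; binary
# masks are used here only for illustration.

def illegal_plowing_fraction(arable_mask, protected_mask):
    """Return the fraction of protected pixels marked as arable."""
    protected = plowed = 0
    for arable_row, prot_row in zip(arable_mask, protected_mask):
        for a, p in zip(arable_row, prot_row):
            if p:
                protected += 1
                if a:
                    plowed += 1
    return plowed / protected if protected else 0.0

# Toy 4x4 scene: 2 of the 4 protected pixels overlap plowed land.
arable =    [[1, 1, 0, 0],
             [1, 1, 0, 0],
             [0, 0, 0, 0],
             [0, 0, 0, 0]]
protected = [[0, 1, 1, 0],
             [0, 1, 1, 0],
             [0, 0, 0, 0],
             [0, 0, 0, 0]]

print(illegal_plowing_fraction(arable, protected))  # -> 0.5
```

A nonzero overlap fraction for a protected area would then trigger a closer inspection of the corresponding parcel.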

Keywords:
Earth remote sensing images, arable land, land use, protected areas, deep neural networks, semantic segmentation.

Citation:
Bavrina AY, Agafonov AA. Recognition of arable lands on the territory of Samara region using satellite images for solving land use problems. Computer Optics 2025; 49(6): 1002-1011. DOI: 10.18287/2412-6179-CO-1754.

Acknowledgements:
This work was financially supported by the Russian Science Foundation under project No. 23-11-20013.

References:

  1. Soifer VA, ed. Perspective information technologies of remote sensing of the Earth [In Russian]. Samara: "Novaya Tekhnika" Publisher; 2015. ISBN: 978-5-88940-138-4.
  2. Soifer VA, Sergeev VV, Kopenkov VN, Chernov AV. Earth remote sensing and geographic information systems. Pattern Recognit Image Anal 2023; 33(4): 1129-1141. DOI: 10.1134/S1054661823040454.
  3. Yakushev VP, Dubenok NN, Loupian EA. Earth remote sensing technologies for agriculture: application experience and development prospects [In Russian]. Sovr Probl DZZ Kosm 2019; 16(3): 11-23. DOI: 10.21046/2070-7401-2019-16-3-11-23.
  4. Proceedings of the III All-Russian scientific conference with international participation "Application of remote sensing of the Earth in agriculture" [In Russian]. Saint-Petersburg: FGBNU AFI Publisher; 2021. ISBN: 978-5-905200-47-2.
  5. Weiss M, Jacob F, Duveiller G. Remote sensing for agricultural applications: A meta-review. Remote Sens Environ 2019; 236(5): 111402. DOI: 10.1016/j.rse.2019.111402.
  6. Xu F, Yao X, Zhang K, Yang H, Feng Q, Li Y, Yan S, Gao B, Li S, Yang J, Zhang C, Lv Y, Zhu D, Ye S. Deep learning in cropland field identification: A review. Comput Electron Agric 2024; 222: 109042. DOI: 10.1016/j.compag.2024.109042.
  7. Keskes MI. Review of the current state of deep learning applications in agriculture. OSFPreprints. 2025. Source: <https://osf.io/preprints/osf/xg4rs_v2>. DOI: 10.31219/osf.io/xg4rs_v2.
  8. Li J, Cai Y, Li Q, Kou M, Zhang T. A review of remote sensing image segmentation by deep learning methods. Int J Digit Earth 2024; 17(1): 2328827. DOI: 10.1080/17538947.2024.2328827.
  9. Kerner H, Chaudhari S, Ghosh A, Robinson C, Ahmad A, Choi E, Jacobs N, Holmes C, Mohr M, Dodhia R, Lavista Ferres J, Marcus J. Fields of the World: a machine learning benchmark dataset for global agricultural field boundary segmentation. arXiv Preprint. 2024. Source: <https://arxiv.org/abs/2409.16252>. DOI: 10.48550/arXiv.2409.16252.
  10. Osipov YS, ed. The great Russian encyclopedia [In Russian]. Moscow: "Bolshaya Rossiyskaya Encyclopedia" Publisher; 2004. ISBN: 5-85270-320-6.
  11. Bauer ME, Cipra JE. Identification of agricultural crops by computer processing of ERTS MSS data. Proc Symp on Significant Results from ERTS-1 1973: 205-212.
  12. Gonzalez RC, Woods RE. Digital image processing. 2nd ed. Prentice Hall Publisher; 2002. ISBN: 0201180758.
  13. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In Book: Navab N, Hornegger J, Wells WM, Frangi AF, eds. Medical image computing and computer-assisted intervention – MICCAI 2015. 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III. Dordrecht: Springer International Publishing Switzerland; 2015: 234-241. DOI: 10.1007/978-3-319-24574-4_28.
  14. Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N. An image is worth 16x16 words: transformers for image recognition at scale. arXiv Preprint. 2020. Source: <https://arxiv.org/abs/2010.11929>. DOI: 10.48550/arXiv.2010.11929.
  15. He K, Zhang X, Ren S, Sun J. Identity mappings in deep residual networks. In Book: Leibe B, Matas J, Sebe N, Welling M, eds. Computer vision – ECCV 2016. 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV. Cham, Switzerland: Springer International Publishing AG; 2016: 630-645. DOI: 10.1007/978-3-319-46493-0_38.
  16. Lin T, Dollar P, Girshick R, He K, Hariharan B, Belongie S. Feature Pyramid Networks for object detection. 2017 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2017; 936-944. DOI: 10.1109/CVPR.2017.106.
  17. Xiao T, Liu Y, Zhou B, Jiang Y, Sun J. Unified perceptual parsing for scene understanding. In Book: Ferrari V, Hebert M, Sminchisescu C, Weiss Y, eds. Computer Vision – ECCV 2018: 15th European Conference, Munich, Germany, September 8–14, 2018, Proceedings, Part V. Berlin, Heidelberg: Springer-Verlag; 2018: 432-448. DOI: 10.1007/978-3-030-01228-1_26.
  18. Li Z, Wang Y, Tian F, Zhang J, Chen Y, Li K. BAFormer: A novel boundary-aware compensation UNet-like transformer for high-resolution cropland extraction. Remote Sens 2024; 16(14): 2526. DOI: 10.3390/rs16142526.
  19. Chen L-C, Papandreou G, Kokkinos I, Murphy K, Yuille A. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans Pattern Anal Mach Intell 2018; 40(4): 834-848. DOI: 10.1109/TPAMI.2017.2699184.
  20. Zhao H, Shi J, Qi X, Wang X, Jia J. Pyramid scene parsing network. 2017 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2017: 6230-6239. DOI: 10.1109/CVPR.2017.660.
  21. Liu Y, Zhang T, Huang Y, Shi F. An edge-aware multitask network based on CNN and transformer backbone for farmland instance segmentation. IEEE J Sel Top Appl Earth Obs Remote Sens 2024; 17: 13765-13779. DOI: 10.1109/JSTARS.2024.3435425.
  22. Luo W, Zhang C, Li Y, Yan Y. MLGNet: multi-task learning network with attention-guided mechanism for segmenting agricultural fields. Remote Sens 2023; 15: 3934. DOI: 10.3390/rs15163934.
  23. Diakogiannis F, Waldner F, Caccetta P, Wu C. ResUNet-a: a deep learning framework for semantic segmentation of remotely sensed data. ISPRS J Photogramm Remote Sens 2020; 162: 94-114. DOI: 10.1016/j.isprsjprs.2020.01.013.
  24. Xu L, Yang P, Yu J, Peng F, Xu J, Song S, Wu Y. Extraction of cropland field parcels with high resolution remote sensing using multi-task learning. Eur J Remote Sens 2023; 56(1): 2181874. DOI: 10.1080/22797254.2023.2181874.
  25. Matton N, Sepulcre Canto G, Waldner F, Valero S, Morin D, Inglada J, Arias M, Bontemps S, Koetz B, Defourny P. An automated method for annual cropland mapping along the season for various globally-distributed agrosystems using high spatial and temporal resolution time series. Remote Sens 2015; 7: 13208-13232. DOI: 10.3390/rs71013208.
  26. Plotnikov DE, Kolbudaev PA, Bartalev SA, Lupyan EA. Automatic recognition of used arable lands based on seasonal time series of reconstructed Landsat images [In Russian]. Sovremennye Problemy Distantsionnogo Zondirovaniya Zemli iz Kosmosa 2018; 15(2): 112-127. DOI: 10.21046/2070-7401-2018-15-2-112-127.
  27. Zhao X, Wang X, Cao G, Chen K, Tang W, Zhang Z. Crop identification by using seasonal parameters extracted from time series Landsat images in a mountainous agricultural county of Eastern Qinghai Province, China. J Agric Sci 2017; 9(4): 116-127. DOI: 10.5539/jas.v9n4p116.
  28. Bartalev SA, Egorov VA, Loupian EA, Plotnikov DE, Uvarov IA. Recognition of arable lands using multi-annual satellite data from spectroradiometer MODIS and locally adaptive supervised classification. Computer Optics 2011; 35(1): 103-116.
  29. Zhang H, Liu M, Wang Y, Shang J, Liu X, Li B, Song A, Li Q. Automated delineation of agricultural field boundaries from Sentinel-2 images using recurrent residual U-Net. Int J Appl Earth Obs Geoinf 2021; 105: 102557. DOI: 10.1016/j.jag.2021.102557.
  30. Jong M, Guan K, Wang S, Huang Y, Peng B. Improving field boundary delineation in ResUNets via adversarial deep learning. Int J Appl Earth Obs Geoinf 2022; 112: 102877. DOI: 10.1016/j.jag.2022.102877.
  31. Waldner F, Diakogiannis FI. Deep learning on edge: Extracting field boundaries from satellite images with a convolutional neural network. Remote Sens Environ 2020; 245: 111741. DOI: 10.1016/j.rse.2020.111741.
  32. Yan S, Yao X, Sun J, Huang W, Yang L, Zhang C, Gao B, Yang J, Yun W, Zhu D. TSANet: A deep learning framework for the delineation of agricultural fields utilizing satellite image time series. Comput Electron Agric 2024; 220: 108902. DOI: 10.1016/j.compag.2024.108902.
  33. Turkoglu MO, D'Aronco S, Perich G, Liebisch F, Streit C, Schindler K, Wegner JD. Crop mapping from image time series: Deep learning with multi-scale label hierarchies. Remote Sens Environ 2021; 264: 112603. DOI: 10.1016/j.rse.2021.112603.
  34. Bykova D, Denisova A, Fedoseev V, Korchikov E. Methods for updating forest inventory data through multi-temporal Sentinel-2 image analysis. In Book: Bajaj A, Abraham A, Reddy Madhavi K, Castillo O, eds. Proceedings of the 15th International Conference on Soft Computing and Pattern Recognition (SoCPaR 2023). Volume 4: Real World Applications. Cham, Switzerland: Springer Nature Switzerland AG; 2025: 258-266. DOI: 10.1007/978-3-031-81086-2_29.
  35. Zahid S, Ghuffar S, Rehman OU, Shah SRA. Deep learning for automated multi-scale functional field boundaries extraction using multi-date Sentinel-2 and PlanetScope imagery: Case study of Netherlands and Pakistan. arXiv Preprint. 2024. Source: <https://arxiv.org/abs/2411.15923>. DOI: 10.48550/arXiv.2411.15923.
  36. Song W, Wang C, Dong T, Wang Z, Wang C, Mu X, Zhang H. Hierarchical extraction of cropland boundaries using Sentinel-2 time-series data in fragmented agricultural landscapes. Comput Electron Agric 2023; 212: 108097. DOI: 10.1016/j.compag.2023.108097.
  37. Goodfellow I, Bengio Y, Courville A. Deep learning. Cambridge: The MIT Press; 2016. ISBN: 9780262035613.
  38. Wang Z, Li J, Tan Z, Liu X, Li M. Swin-UperNet: A semantic segmentation model for mangroves and Spartina alterniflora Loisel based on UperNet. Electronics 2023; 12(5): 1111. DOI: 10.3390/electronics12051111.
  39. Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B. Swin transformer: hierarchical vision transformer using shifted windows. 2021 IEEE/CVF Int Conf on Computer Vision (ICCV) 2021: 9992-10002. DOI: 10.1109/ICCV48922.2021.00986.
  40. Douglas D, Peucker T. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. The Canadian Cartographer 1973; 10(2): 112-122. DOI: 10.3138/FM57-6770-U75U-7727.
  41. Kavelenova L, Prokhorova N, Fedoseyev V. On the experience of field monitoring and remote sensing technologies integration in regional phytodiversity conservation. E3S Web Conf 2023; 419: 02013. DOI: 10.1051/e3sconf/202341902013.
  42. Samara-Informsputnik. 2025. Source: <https://samis.geosamara.ru/>.

© 2009, IPSI RAS
151, Molodogvardeiskaya str., Samara, 443001, Russia; E-mail: journal@computeroptics.ru; Tel: +7 (846) 242-41-24 (Executive secretary), +7 (846) 332-56-22 (Issuing editor); Fax: +7 (846) 332-56-20