
Application of generative adversarial neural networks for the formation of databases in scanning tunneling microscopy
T.E. Shelkovnikova 1, S.F. Egorov 1, P.V. Gulyaev 1

Institute of Mechanics, Udmurt Federal Research Center, Ural Branch, Russian Academy of Sciences,
426067, Izhevsk, Russia, ul. Baramzinoi 34


DOI: 10.18287/2412-6179-CO-1144

Pages: 314-322.

Full text of article: Russian language.

Abstract:
We discuss the development of a technique for the automatic generation of databases of images obtained with a scanning tunneling microscope (STM). State-of-the-art methods and tools for the automatic processing of images from probe and electron microscopes are analyzed. We propose using generative adversarial networks to synthesize STM images for building training databases. The training and comparison of deep convolutional generative adversarial network (DCGAN) architectures, implemented with the OpenCV and Keras libraries on top of TensorFlow, is described; the best architecture is identified by computing the IS, FID, and KID metrics. Images produced by the DCGAN are upscaled both by fine-tuning a super-resolution generative adversarial network (SRGAN) and by bilinear interpolation, implemented in Python. Analysis of the computed metric values shows that the best image generation results are obtained with DCGAN96 and SRGAN. The FID and KID values for the SRGAN method are better than those for bilinear interpolation in all cases except DCGAN32. All computations were performed on a GeForce GTX 1070 graphics card. A method for the automatic generation of an STM image database based on the stepwise application of DCGAN and SRGAN is developed. Results of generation are presented, comparing the original image, the image produced by DCGAN96, and the image upscaled by SRGAN.
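The abstract compares SRGAN upscaling against a bilinear interpolation baseline. Below is a minimal NumPy sketch of that baseline for a single-channel (grayscale) STM image; the function name and pure-NumPy implementation are illustrative only, as the paper itself relies on Python with OpenCV.

```python
import numpy as np

def upscale_bilinear(img, factor):
    """Upscale a 2D grayscale image by an integer `factor` using
    bilinear interpolation (the baseline SRGAN is compared against)."""
    h, w = img.shape
    new_h, new_w = h * factor, w * factor
    # Map output pixel centers back onto the input grid
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    # Integer neighbors surrounding each mapped coordinate
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    # Fractional weights along each axis
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Blend the four nearest neighbors
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

In an OpenCV-based pipeline, `cv2.resize(img, None, fx=factor, fy=factor, interpolation=cv2.INTER_LINEAR)` performs the same operation.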

Keywords:
STM-image, generative adversarial neural networks, automatic generation method, database, convolution.

Citation:
Shelkovnikova TE, Egorov SF, Gulyaev PV. Application of generative adversarial neural networks for the formation of databases in scanning tunneling microscopy. Computer Optics 2023; 47(2): 314-322. DOI: 10.18287/2412-6179-CO-1144.

References:

  1. Okunev AG, Mashukov MYu, Nartova AV, Matveev AV. Nanoparticle recognition on scanning probe microscopy images using computer vision and deep learning. Nanomaterials 2020; 10(7): 1285. DOI: 10.3390/nano10071285.
  2. Krull A, Hirsch P, Rother C, Schiffrin A, Krull C. Artificial-intelligence-driven scanning probe microscopy. Commun Phys 2020; 3: 54. DOI: 10.1038/s42005-020-0317-3.
  3. Farley S, Hodgkinson JEA, Gordon OM, et al. Improving the segmentation of scanning probe microscope images using convolutional neural networks. Mach Learn: Sci Technol 2020; 2(1): 015015. DOI: 10.1088/2632-2153/abc81c.
  4. Ziatdinov M, Fuchs U, Owen J, Randall J, Kalinin S. Robust multi-scale multi-feature deep learning for atomic and defect identification in Scanning Tunneling Microscopy on H-Si(100) 2x1 surface. arXiv Preprint. 2020. Source: <https://arxiv.org/abs/2002.04716>. DOI: 10.48550/arXiv.2002.04716.
  5. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In Book: Navab N, Hornegger J, Wells WM, Frangi AF, eds. Medical image computing and computer-assisted intervention – MICCAI 2015. Cham: Springer; 2015: 234-241. DOI: 10.1007/978-3-319-24574-4_28.
  6. Shelkovnikov E, Shlyakhtin K, Shelkovnikova T, Egorov S. Application of neural network of U-Net architecture for segmentation of nanoparticles on STM-probes. HFIM 2019; 21(2): 330-336. DOI: 10.15350/17270529.2019.2.36.
  7. Egorov S, Arhipov I, Shelkovnikova T. Information system for segmentation of nanoparticles in STM-images. CEUR Workshop Proc 2020; 2665: 130-134.
  8. Lubbers N, Lookman T, Barros K. Inferring low-dimensional microstructure representations using convolutional neural networks. Phys Rev E 2017; 96: 052111. DOI: 10.1103/PhysRevE.96.052111.
  9. Belianinov A, Vasudevan R, Strelcov E, et al. Big data and deep data in scanning and electron microscopies: deriving functionality from multidimensional data sets. Adv Struct Chem Imag 2015; 1: 6. DOI: 10.1186/s40679-015-0006-6.
  10. Chowdhury A, Kautz E, Yener B, Lewis D. Image driven machine learning methods for microstructure recognition. Comput Mater Sci 2016; 123: 176-187. DOI: 10.1016/j.commatsci.2016.05.034.
  11. Li W, Field KG, Morgan D. Automated defect analysis in electron microscopic images. Npj Comput Mater 2018; 4: 36. DOI: 10.1038/s41524-018-0093-8.
  12. Gavrilov DA. Investigation of the applicability of the convolutional neural network U-Net to a problem of segmentation of aircraft images. Computer Optics 2021; 45(4): 575-579. DOI: 10.18287/2412-6179-CO-804.
  13. Gorbachev VA, Krivorotov IA, Markelov AO, Kotlyarova EV. Semantic segmentation of satellite images of airports using convolutional neural networks. Computer Optics 2020; 44(4): 636-645. DOI: 10.18287/2412-6179-CO-636.
  14. Majurski M, Manescu P, Padi S, et al. Cell image segmentation using generative adversarial networks, transfer learning, and augmentations. 2019 IEEE/CVF Conf on Computer Vision and Pattern Recognition Workshops (CVPRW) 2019: 1114-1122. DOI: 10.1109/CVPRW.2019.00145.
  15. Rühle B, Krumrey JF, Hodoroaba V-D. Workflow towards automated segmentation of agglomerated, non-spherical particles from electron microscopy images using artificial neural networks. Sci Rep 2021; 11: 4942. DOI: 10.1038/s41598-021-84287-6.
  16. Zhang H, Fang C, Xie X, Yang Y, Jin D. High-throughput, high-resolution registration-free generative adversarial network microscopy. Biomed Opt Express 2019; 10(3): 1044-1063. DOI: 10.1364/BOE.10.001044.
  17. Goodfellow IJ, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. Adv Neural Inf Process Syst 2014; 27.
  18. Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv Preprint. 2016. Source: <https://arxiv.org/abs/1511.06434>. DOI: 10.48550/arXiv.1511.06434.
  19. Nair V, Hinton GE. Rectified linear units improve restricted Boltzmann machines. Int Conf on Machine Learning (ICML) 2010: 807-814.
  20. Salimans T, Goodfellow I, Zaremba W, Cheung V, Radford A, Chen X. Improved techniques for training GANs. Adv Neural Inf Process Syst 2016: 2234-2242. DOI: 10.48550/arXiv.1606.03498.
  21. Publications and Data. Source: <http://particlesnn.nsu.ru/text/publications>.
  22. Hinton GE, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov RR. Improving neural networks by preventing co-adaptation of feature detectors. arXiv Preprint. 2012. Source: <https://arxiv.org/pdf/1207.0580.pdf>.
  23. Mohsen M, Moustafa M. Generating large scale images using GANs. In: Jiang X, Hwang J-N, eds. Eleventh international conference on digital image processing (ICDIP 2019), Guangzhou, China: SPIE; 2019: 195. DOI: 10.1117/12.2540489.
  24. Ledig C, Theis L, Huszar F, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv Preprint. 2017. Source: <https://arxiv.org/abs/1609.04802>. DOI: 10.1109/CVPR.2017.19.
  25. Mao X-J, Shen C, Yang Y-B. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. Proc Advances in Neural Information Processing Systems 2016: 2802-2810. DOI: 10.48550/arXiv.1603.09056.
  26. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2016: 770-778. DOI: 10.1109/CVPR.2016.90.
  27. He K, Zhang X, Ren S, Sun J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. IEEE Int Conf on Computer Vision (ICCV) 2015: 1026-1034. DOI: 10.1109/ICCV.2015.123.
  28. Shi W, Caballero J, Huszár F, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2016: 1874-1883. DOI: 10.1109/CVPR.2016.207.
  29. Barratt S, Sharma R. A note on the inception score. arXiv Preprint. 2018. Source: <https://arxiv.org/pdf/1801.01973.pdf>.
  30. Heusel M, Ramsauer H, Unterthiner T, Nessler B, Hochreiter S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Proc 31st Int Conf on Neural Information Processing Systems (NIPS'17) 2017: 6629-6640.
  31. Bińkowski M, Sutherland DJ, Arbel M, Gretton A. Demystifying MMD GANs. ICLR 2018: Int Conf on Learning Representations 2018. Source: <https://openreview.net/pdf?id=r1lUOzWCW>.
  32. wkentaro/labelme. 2016. Source: <https://github.com/wkentaro/labelme>.

© 2009, IPSI RAS
151, Molodogvardeiskaya str., Samara, 443001, Russia; E-mail: journal@computeroptics.ru ; Tel: +7 (846) 242-41-24 (Executive secretary), +7 (846) 332-56-22 (Issuing editor), Fax: +7 (846) 332-56-20