
Surface classification on a 3D cardiac ventricular model using machine learning
V.D. Dordyuk 1, R.O. Rokeakh 1,2, T.V. Chumarnaya 1,2, O.E. Solovyova 1,2

1 Ural Federal University, Mira str. 19, Yekaterinburg, 620062, Russia;
2 Institute of Immunology and Physiology UrB RAS, Pervomayskaya str. 106, Yekaterinburg, 620078, Russia


DOI: 10.18287/2412-6179-CO-1628

Pages: 853-869.

Full text of the article: Russian language.

Abstract:
This work improves techniques for classifying cardiac ventricular surfaces on a polygonal surface mesh in the context of small datasets. The task is reduced to multi-class classification of points on the surface mesh: machine learning models are trained to classify each mesh vertex from the values of a signed distance function sampled in the vertex's neighborhood. Several models are compared, including FCNN, U-Net, and ResNet neural networks as well as classifiers from the scikit-learn library. In addition to accuracy metrics, the suitability of the classification results for constructing a biventricular coordinate system is assessed. A graph algorithm for correcting potential classification errors is proposed and its effectiveness is demonstrated. Neural network models prove the most effective; among less resource-demanding models, the Random Forest and the Support Vector Classifier trained with Stochastic Gradient Descent show comparable performance.
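The abstract describes the pipeline only at a high level. The Python sketch below illustrates one plausible reading of it: per-vertex features are signed distance function (SDF) values sampled at fixed offsets around each vertex, a scikit-learn Random Forest serves as the multi-class classifier, and a neighbor majority vote stands in for the proposed graph-based correction. The feature layout, label set, and correction rule are illustrative assumptions, not the authors' exact method; a package such as mesh_to_sdf [16] could supply the SDF callable.

```python
# Hypothetical sketch of per-vertex surface classification from SDF
# samples. Feature construction and the correction step are assumptions
# made for illustration; the paper's exact pipeline is not reproduced.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sdf_features(vertices, sdf, offsets):
    """Sample an SDF at fixed offsets around each vertex.

    vertices: (n, 3) array of mesh vertex coordinates.
    sdf:      callable mapping (m, 3) points to (m,) signed distances.
    offsets:  (k, 3) array of sampling offsets (the 'neighborhood').
    Returns an (n, k) feature matrix.
    """
    n, k = len(vertices), len(offsets)
    points = (vertices[:, None, :] + offsets[None, :, :]).reshape(-1, 3)
    return sdf(points).reshape(n, k)

# Multi-class classifier over assumed surface labels (e.g. LV/RV
# endocardium, epicardium, base); training data is not shown here.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
# X_train = sdf_features(train_vertices, train_sdf, offsets)
# clf.fit(X_train, train_labels)

def correct_labels(labels, adjacency, n_iter=3):
    """Simple graph-based cleanup: replace each vertex label by the
    majority label among its mesh neighbors. A stand-in for the
    paper's correction algorithm, whose details are not given here.
    """
    labels = np.asarray(labels)
    for _ in range(n_iter):
        new = labels.copy()
        for v, nbrs in enumerate(adjacency):
            if len(nbrs) == 0:
                continue
            new[v] = np.bincount(labels[nbrs]).argmax()
        labels = new
    return labels
```

A smoothing pass like correct_labels removes isolated mislabeled vertices, which matters because downstream coordinate construction assumes contiguous surface regions.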

Keywords:
computer vision, machine learning, neural networks, surface meshes, digital heart models, geometric heart models.

Citation:
Dordyuk VD, Rokeakh RO, Chumarnaya TV, Solovyova OE. Surface classification on a 3D cardiac ventricular model using machine learning. Computer Optics 2025; 49(5): 853-869. DOI: 10.18287/2412-6179-CO-1628.

Acknowledgements:
The research funding from the Ministry of Science and Higher Education of the Russian Federation (Ural Federal University Program of Development within the Priority-2030 Program) is gratefully acknowledged.

References:

  1. Cerqueira MD, Weissman NJ, Dilsizian V, et al. Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart. Circulation 2002; 105(4): 539-542. DOI: 10.1161/hc0402.102975.
  2. Bayer J, Prassl AJ, Pashaei A, et al. Universal ventricular coordinates: A generic framework for describing position within the heart and transferring data. Med Image Anal 2018; 45: 83-93. DOI: 10.1016/j.media.2018.01.005.
  3. Roney CH, Pashaei A, Meo M, et al. Universal atrial coordinates applied to visualisation, registration and construction of patient specific meshes. Med Image Anal 2019; 55: 65-75. DOI: 10.1016/j.media.2019.04.004.
  4. Schuler S, Pilia N, Potyagaylo D, Loewe A. Cobiveco: Consistent biventricular coordinates for precise and intuitive description of position in the heart – with matlab implementation. Med Image Anal 2021; 74: 102247. DOI: 10.1016/j.media.2021.102247.
  5. Pankewitz LR, Hustad KG, Govil S, et al. A universal biventricular coordinate system incorporating valve annuli: Validation in congenital heart disease. Med Image Anal 2024; 93: 103091. DOI: 10.1016/j.media.2024.103091.
  6. Qi CR, Su H, Mo K, Guibas LJ. Pointnet: Deep learning on point sets for 3D classification and segmentation. arXiv Preprint. 2017. Source: <https://arxiv.org/abs/1612.00593>. DOI: 10.48550/arXiv.1612.00593.
  7. Qian G, Li Y, Peng H, et al. Pointnext: Revisiting PointNet++ with improved training and scaling strategies. arXiv Preprint. 2022. Source: <https://arxiv.org/abs/2206.04670>. DOI: 10.48550/arXiv.2206.04670.
  8. Luo C, Cheng N, Ma S, et al. Mini-PointNetPlus: A local feature descriptor in deep learning model for 3d environment perception. arXiv Preprint. 2023. Source: <https://arxiv.org/abs/2307.13300>. DOI: 10.48550/arXiv.2307.13300.
  9. Hanocka R, Hertz A, Fish N, Giryes R, Fleishman S, Cohen-Or D. Meshcnn: A network with an edge. ACM Trans Graph 2019; 38(4): 90. DOI: 10.1145/3306346.3322959.
  10. Ludwig I, Tyson D, Campen M. Halfedgecnn for native and flexible deep learning on triangle meshes. Comput Graph Forum 2023; 42(5): e14898. DOI: 10.1111/cgf.14898.
  11. Wu Z, Song S, Khosla A, et al. 3D ShapeNets: A deep representation for volumetric shapes. arXiv Preprint. 2015. Source: <https://arxiv.org/abs/1406.5670>. DOI: 10.48550/arXiv.1406.5670.
  12. Wang X, Liu J, Mei T, Luo J. CoSeg: Cognitively inspired unsupervised generic event segmentation. arXiv Preprint. 2021. Source: <https://arxiv.org/abs/2109.15170>. DOI: 10.48550/arXiv.2109.15170.
  13. Dordiuk V, Dzhigil M, Ushenin K. Surface mesh segmentation based on geometry features. 2023 IEEE Ural-Siberian Conf on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT) 2023: 270-273. DOI: 10.1109/USBEREIT58508.2023.10158888.
  14. Hansen B, Lowes M, Ørkild T, et al. Sparsemeshcnn with self-attention for segmentation of large meshes. Proceedings of the Northern Lights Deep Learning Workshop 2022; 3: 1-9. DOI: 10.7557/18.6281.
  15. Dokuchaev A, Chumarnaya T, Bazhutina A, et al. Combination of personalized computational modeling and machine learning for optimization of left ventricular pacing site in cardiac resynchronization therapy. Front Physiol 2023; 14: 1162520. DOI: 10.3389/fphys.2023.1162520.
  16. Marian42. mesh_to_sdf. 2020. Source: <https://github.com/marian42/mesh_to_sdf>.
  17. Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: Machine learning in Python. J Mach Learn Res 2011; 12: 2825-2830.
  18. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. arXiv Preprint. 2015. Source: <https://arxiv.org/abs/1512.03385>. DOI: 10.48550/arXiv.1512.03385.
  19. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. arXiv Preprint. 2015. Source: <https://arxiv.org/abs/1505.04597>. DOI: 10.48550/arXiv.1505.04597.
