
Structure-functional analysis and synthesis of deep convolutional neural networks

Yu.V. Vizilter, V.S. Gorbatsevich, S.Y. Zheltov

Federal State Unitary Enterprise “State Research Institute of Aviation Systems” (FGUP “GosNIIAS”), Moscow, Russia


DOI: 10.18287/2412-6179-2019-43-5-886-900.

Pages: 886-900.

Full text of article: Russian language.

Abstract:
A general approach to the structure-functional analysis and synthesis (SFAS) of deep convolutional neural networks (CNNs) is proposed. The new approach makes it possible to determine in a regular way: from which structure-functional elements (SFEs) CNNs can be constructed; what mathematical properties an SFE must possess; which combinations of SFEs are valid; and what the possible ways are of developing and training deep networks for the analysis and recognition of irregular or heterogeneous data and data with a complex structure (such as irregular arrays, data of various shape and origin, trees, skeletons, graph structures, 2D, 3D, and ND point clouds, triangulated surfaces, analytical data descriptions, etc.). The required set of SFEs is defined, and techniques are proposed that solve the problem of structure-functional analysis and synthesis of a CNN using SFEs and the rules for their combination.
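
To make the idea concrete, below is a minimal illustrative sketch (not the authors' formalism) of how one convolution-like block for irregular data, here a 2D point cloud, can be assembled from generic building elements: a per-point linear map, a neighborhood aggregation, and a nonlinearity. All names (point_conv, radius, the toy dimensions) are assumptions introduced for illustration only.

import numpy as np

# Illustrative sketch only: one "convolution-like" structural block for an
# irregular 2D point cloud, assembled from generic elements -- a per-point
# linear map, a neighborhood aggregation, and a nonlinearity.

def relu(x):
    return np.maximum(x, 0.0)

def point_conv(points, feats, weights, radius=0.5):
    # Per-point linear transform followed by a nonlinearity.
    transformed = relu(feats @ weights)
    # Irregular neighborhood structure: all points within the given radius.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    mask = dists <= radius
    # Aggregate (average) the transformed features over each neighborhood.
    out = (mask[..., None] * transformed[None, :, :]).sum(axis=1)
    return out / mask.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
pts = rng.random((100, 2))            # 100 irregularly placed 2D points
x = rng.random((100, 8))              # 8 input features per point
w1 = 0.1 * rng.standard_normal((8, 16))
w2 = 0.1 * rng.standard_normal((16, 4))
h = point_conv(pts, x, w1)            # two blocks of the same kind stacked
y = point_conv(pts, h, w2)
print(y.shape)                        # (100, 4)

Because the block makes no assumption about a regular grid, it can be stacked for point sets of any layout; the SFAS framework described in the paper addresses which combinations of such elements are mathematically valid.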

Keywords:
deep neural networks, machine learning, data structures.

Citation:
Vizilter YuV, Gorbatsevich VS, Zheltov SY. Structure-functional analysis and synthesis of deep convolutional neural networks. Computer Optics 2019; 43(5): 886-900. DOI: 10.18287/2412-6179-2019-43-5-886-900.

Acknowledgements:
The work was funded by the Russian Science Foundation (RSF), grant No. 16-11-00082.

References:

  1. Krizhevsky A, Sutskever I, Hinton G. ImageNet classification with deep convolutional neural networks. Proceedings of the 25th International Conference on Neural Information Processing Systems 2012; 1: 1106-1114.
  2. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2015: 1-9.
  3. Wu X. Learning robust deep face representation. Source: <https://arxiv.org/pdf/1507.04844.pdf>.
  4. Lin M, Chen Q, Yan S. Network in network. Source: <https://arxiv.org/pdf/1312.4400.pdf>.
  5. Iandola FN, Han S, Moskewicz MW, Ashraf K, Dally WJ, Keutzer K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. Source: <https://arxiv.org/abs/1602.07360>.
  6. Szegedy C, Ioffe S, Vanhoucke V. Inception-v4, Inception-ResNet and the impact of residual connections on learning. Source: <https://arxiv.org/abs/1602.07261>.
  7. Larsson G, Maire M, Shakhnarovich G. Fractalnet: ultra-deep neural networks without residuals. Source: <https://arxiv.org/abs/1605.07648>.
  8. Duvenaud D, Maclaurin D, Aguilera-Iparraguirre J, Gomez-Bombarelli R, Hirzel T, Aspuru-Guzik A, Adams RP. Convolutional networks on graphs for learning molecular fingerprints.  Source: <https://arxiv.org/pdf/1509.09292.pdf>.
  9. De Cao N, Kipf T. MolGAN: An implicit generative model for small molecular graphs. Source: <https://arxiv.org/abs/1805.11973>.
  10. Gomes J, Ramsundar B, Feinberg EN, Pande VS. Atomic convolutional networks for predicting protein-ligand binding affinity. Source: <https://arxiv.org/pdf/1703.10603.pdf>.
  11. Yao L, Mao C, Luo Y. Graph convolutional networks for text classification. Source: <https://arxiv.org/pdf/1809.05679.pdf>.
  12. Xiong W, Hoang T, Wang WY. DeepPath: A reinforcement learning method for knowledge graph reasoning. Source: <https://arxiv.org/pdf/1707.06690.pdf>.
  13. Maturana D, Scherer S. VoxNet: A 3D convolutional neural network for real-time object recognition. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2015: 922-928. DOI: 10.1109/IROS.2015.7353481.
  14. Tran D, Bourdev L, Fergus R. Learning spatiotemporal features with 3D convolutional networks. IEEE International Conference on Computer Vision (ICCV) 2015: 4489-4497. DOI: 10.1109/ICCV.2015.510.
  15. Riegler G, Ulusoys AO, Geiger A. Octnet: Learning deep 3D representations at high resolutions. CVPR 2017: 6620-6629.
  16. Riegler G, Ulusoy AO, Bischof H, Geiger A. OctNetFusion: Learning depth fusion from data. 3DV 2017: 57-66.
  17. Klokov R, Lempitsky V. Escape from cells: Deep Kd-networks for the recognition of 3D point cloud models. ICCV 2017: 863-872.
  18. Qi CR, Su H, Mo K, Guibas LJ. PointNet: Deep learning on point sets for 3D classification and segmentation. CVPR 2017: 652-660.
  19. Qi CR, Yi L, Su H, Guibas L. PointNet++: Deep hierarchical feature learning on point sets in a metric space. NIPS 2017: 5105-5114.
  20. Bruna J, Zaremba W, Szlam A, LeCun Y. Spectral networks and locally connected networks on graphs. ICLR 2014.
  21. Henaff M, Bruna J, LeCun Y. Deep convolutional networks on graph-structured data. Source: <https://arxiv.org/pdf/1506.05163.pdf>.
  22. Defferrard M, Bresson X, Vandergheynst P. Convolutional neural networks on graphs with fast localized spectral filtering. Source: <https://arxiv.org/pdf/1606.09375.pdf>.
  23. Sinha A, Bai J, Ramani K. Deep learning 3D shape surfaces using geometry images. ECCV 2016: 223-240.
  24. Maron H, Galun M, Aigerman N, Trope M, Dym N, Yumer E, Kim VG, Lipman Y. Convolutional neural networks on surfaces via seamless toric covers. ACM Trans Graph 2017; 36(4): 71.
  25. Ezuz D, Solomon J, Kim VG, Ben-Chen M. GWCNN: A metric alignment layer for deep shape analysis. Computer Graphics Forum 2017; 36(5): 49-57.
  26. Masci J, Boscaini D, Bronstein M, Vandergheynst P. Geodesic convolutional neural networks on Riemannian manifolds. ICCV 2015: 832-840.
  27. Boscaini D, Masci J, Rodolà E, Bronstein MM. Learning shape correspondence with anisotropic convolutional neural networks. NIPS 2016: 3197-3205.
  28. Vizilter Yu, Kostromov N, Vorotnikov A, Gorbatsevich V. Real-time face identification via CNN and boosted hashing forest. The 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016: 78-86.
  29. Serra J. Image analysis and mathematical morphology. London: Academic Press Inc; 1982.
  30. Vizilter YuV, Gorbatsevich VS, Zheltov SY, Rubis AY, Vorotnikov AV. Morphlets: a new class of tree-structured morphological descriptors of image shape [In Russian]. Computer Optics 2015; 39(1): 101-108. DOI: 10.18287/0134-2452-2015-39-1-101-108.
  31. Su H, Maji S, Kalogerakis E, Learned-Miller EG. Multi-view convolutional neural networks for 3D shape recognition. ICCV 2015: 945-953.
  32. Huang H, Kalogerakis E, Chaudhuri S, Ceylan D, Kim V, Yumer E. Learning local shape descriptors with view-based convolutional neural networks. ACM Trans Graph 2018; 37(1): 6.
  33. Wu Z, Song S, Khosla A, Yu F, Zhang L, Tang X, Xiao J. 3D ShapeNets: A deep representation for volumetric shapes. CVPR 2015: 1912-1920.
  34. Maturana D, Scherer S. 3D convolutional neural networks for landing zone detection from LiDAR. ICRA 2015: 3471-3478.
  35. Qi CR, Su H, Nießner M, Dai A, Yan M, Guibas LJ. Volumetric and multi-view CNNs for object classification on 3D data. CVPR 2016: 5648-5656.
  36. Sedaghat N, Zolfaghari M, Amiri E, Brox T. Orientation-boosted voxel nets for 3D object recognition. BMVC 2017: 97.
  37. Graham B, van der Maaten L. Submanifold sparse convolutional networks. Source: <https://arxiv.org/pdf/1706.01307.pdf>.

 

