
Vehicle wheel weld detection based on improved YOLO v4 algorithm
T.J. Liang 1,2, W.G. Pan 1,2, H. Bao 1,2, F. Pan 1,2

Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing, China,
College of Robotics, Beijing Union University, Beijing, China


DOI: 10.18287/2412-6179-CO-887

Pages: 271-279.

Full text of article: English language.

Abstract:
In recent years, vision-based object detection has made great progress across many fields. In automobile manufacturing, for instance, weld detection is a key step of weld inspection in wheel production: automatically detecting and locating welds on wheels can improve the efficiency of wheel hub production. At present, few deep-learning-based methods exist for detecting vehicle wheel welds. In this paper, a method based on the YOLO v4 algorithm is proposed to detect vehicle wheel welds. The main contributions of the proposed method are the use of k-means clustering to optimize anchor box sizes, a Distance-IoU (DIoU) loss to improve the loss function of YOLO v4, and non-maximum suppression based on DIoU to eliminate redundant candidate bounding boxes. These steps improve detection accuracy: experiments show that the improved method achieves high accuracy in vehicle wheel weld detection (4.92 percentage points higher than the baseline model in AP75 and 2.75 percentage points higher in AP50). We also evaluated the proposed method on the public KITTI dataset, where the detection results confirm the improved method's effectiveness.
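As a rough illustration of the improvements summarized above — a sketch, not the authors' code — the k-means anchor selection and the DIoU-based non-maximum suppression can be written in NumPy as follows. The function names, the median-update k-means variant, and all parameter defaults are assumptions for illustration:

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Cluster ground-truth box (w, h) pairs with IoU as similarity,
    as in the YOLO family, to pick k anchor sizes."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)].astype(float)
    for _ in range(iters):
        # IoU between each (w, h) and each centre, boxes anchored at the origin
        inter = (np.minimum(wh[:, None, 0], centers[None, :, 0])
                 * np.minimum(wh[:, None, 1], centers[None, :, 1]))
        union = (wh[:, 0] * wh[:, 1])[:, None] + centers[:, 0] * centers[:, 1] - inter
        assign = (inter / union).argmax(axis=1)
        for j in range(k):
            members = wh[assign == j]
            if len(members):  # skip empty clusters
                centers[j] = np.median(members, axis=0)
    return centers

def diou(box, boxes):
    """Distance-IoU between one [x1, y1, x2, y2] box and an array of boxes:
    DIoU = IoU - d^2 / c^2, with d the centre distance and c the diagonal
    of the smallest enclosing box."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    iou = inter / (area_a + area_b - inter)
    # squared centre distance d^2
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    bx = (boxes[:, 0] + boxes[:, 2]) / 2
    by = (boxes[:, 1] + boxes[:, 3]) / 2
    d2 = (cx - bx) ** 2 + (cy - by) ** 2
    # squared diagonal c^2 of the smallest enclosing box
    ex1 = np.minimum(box[0], boxes[:, 0]); ey1 = np.minimum(box[1], boxes[:, 1])
    ex2 = np.maximum(box[2], boxes[:, 2]); ey2 = np.maximum(box[3], boxes[:, 3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return iou - d2 / c2

def diou_nms(boxes, scores, threshold=0.5):
    """Greedy NMS that suppresses candidates by DIoU instead of plain IoU."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        order = rest[diou(boxes[i], boxes[rest]) <= threshold]
    return keep
```

Because DIoU subtracts a normalized centre-distance penalty from IoU, two boxes that overlap heavily but sit on distant objects are less likely to suppress each other than under plain IoU-based NMS.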

Keywords:
object detection, vehicle wheel weld, YOLO v4, DIoU.

Citation:
Liang TJ, Pan WG, Bao H, Pan F. Vehicle wheel weld detection based on improved YOLO v4 algorithm. Computer Optics 2022; 46(2): 271-279. DOI: 10.18287/2412-6179-CO-887.

Acknowledgements:
The work was funded by the National Natural Science Foundation of China (Nos. 61802019, 61932012, 61871039), the Beijing Municipal Education Commission Science and Technology Program (Nos. KM201911417009, KM201911417003, KM201911417001), and the Beijing Union University Research and Innovation Projects for Postgraduates (No. YZ2020K001).

References:

  1. Viola P, Jones M. Robust real-time object detection. Int J Comput Vis 2004; 57(2): 137-154.
  2. Chen TT, Wang RL, Dai B, Liu DX, Song JZ. Likelihood-field-model-based dynamic vehicle detection and tracking for self-driving. IEEE Trans Intell Transp Syst 2016; 17(11): 3142-3158.
  3. Fu ZH, Chen YW, Yong HW, Jiang RX, Zhang L, Hua XS. Foreground gating and background refining network for surveillance object detection. IEEE Trans Image Process 2019; 28(12): 6077-6090.
  4. Kong H, Yang J, Chen ZH. Accurate and efficient inspection of speckle and scratch defects on surfaces of planar products. IEEE Trans Industr Inform 2017; 13(4): 1855-1865.
  5. Guo ZX, Shui PL. Anomaly based sea-surface small target detection using k-nearest neighbour classification. IEEE Trans Aerosp Electron Syst 2020; 56(6): 4947-4964.
  6. Imoto K, Nakai T, Ike T, Haruki K, Sato Y. A CNN-based transfer learning method for defect classification in semiconductor manufacturing. IEEE Trans Semicond Manuf 2019; 32(4): 455-459.
  7. Pashina TA, Gaidel AV, Zelter PM, Kapishnikov AV, Nikonorov AV. Automatic highlighting of the region of interest in computed tomography images of the lungs. Computer Optics 2020; 44(1): 74-81. DOI: 10.18287/2412-6179-CO-659.
  8. Zou ZX, Shi ZW, Guo YH, Ye JP. Object detection in 20 years: A survey. arXiv Preprint 2019. Source: <https://arxiv.org/abs/1905.05055>.
  9. Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE 1998; 86(11): 2278-2324.
  10. Lowe DG. Object recognition from local scale-invariant features. IEEE Int Conf on Computer Vision, Kerkyra 1999: 1150-1157. DOI: 10.1109/ICCV.1999.790410.
  11. Dalal N, Triggs B. Histograms of oriented gradients for human detection. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego 2005: 886-893. DOI: 10.1109/CVPR.2005.177.
  12. Suykens JAK, Vandewalle J. Least squares support vector machine classifiers. Neural Process Lett 1999; 9: 293-300.
  13. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Int Conf on Neural Information Processing Systems, New York 2012: 1097-1105.
  14. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. IEEE Conf on Computer Vision and Pattern Recognition, Las Vegas 2016: 779-788. DOI: 10.1109/CVPR.2016.91.
  15. Redmon J, Farhadi A. YOLO9000: better, faster, stronger. IEEE Conf on Computer Vision and Pattern Recognition, Honolulu 2017: 7263-7271. DOI: 10.1109/CVPR.2017.690.
  16. Redmon J, Farhadi A. YOLOv3: An incremental improvement. arXiv Preprint 2018. Source: <https://arxiv.org/abs/1804.02767>.
  17. Bochkovskiy A, Wang CY, Mark-Liao HY. YOLOv4: Optimal speed and accuracy of object detection. arXiv Preprint 2020. Source: <https://arxiv.org/abs/2004.10934>.
  18. Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, Berg AC. SSD: Single shot multibox detector. European Conf on Computer Vision, Cham 2016: 21-37.
  19. Gidaris S, Komodakis N. Object detection via a multi-region and semantic segmentation-aware CNN model. Int Conf on Computer Vision, Santiago 2015: 1134-1142. DOI: 10.1109/ICCV.2015.135.
  20. Girshick R. Fast R-CNN. Int Conf on Computer Vision, Santiago 2015: 1440-1448. DOI: 10.1109/ICCV.2015.169.
  21. Ren SQ, He KM, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 2016; 39(6): 1137-1149.
  22. He KM, Gkioxari G, Dollar P, Girshick R. Mask R-CNN. IEEE Int Conf on Computer Vision, Venice 2017: 2980-2988. DOI: 10.1109/ICCV.2017.322.
  23. Cai ZW, Vasconcelos N. Cascade R-CNN: Delving into high quality object detection. IEEE Conf on Computer Vision and Pattern Recognition, Salt Lake City 2018: 6154-6162. DOI: 10.1109/CVPR.2018.00644.
  24. Zhou HY, Zhuang ZL, Liu Y, Liu Y, Zhang X. Defect classification of green plums based on deep learning. Sensors 2020; 20(23): 6993.
  25. Huang LC, Yang Y, Deng YF, Yu YN. Densebox: Unifying landmark localization with end to end object detection. arXiv Preprint 2015. Source: <https://arxiv.org/abs/1509.04874>.
  26. Rezatofighi H, Tsoi N, Gwak JY, Sadeghian A, Reid L, Savarese S. Generalized intersection over union: A metric and a loss for bounding box regression. IEEE Conf on Computer Vision and Pattern Recognition, Long Beach 2019: 658-666. DOI: 10.1109/CVPR.2019.00075.
  27. Everingham M, Gool LV, Williams CKI, Winn J, Zisserman A. The PASCAL visual object classes (VOC) Challenge. Int J Comput Vis 2010; 88: 303-338.
  28. Wang CY, Mark-Liao HY, Wu YH, Chen PY, Hsieh JW, Yeh IH. CSPNet: A new backbone that can enhance learning capability of CNN. IEEE Conf on Computer Vision and Pattern Recognition Workshops 2020: 1571-1580. DOI: 10.1109/CVPRW50498.2020.00203.
  29. Zheng ZH, Wang P, Liu W, Li JZ, Ye RG, Ren DW. Distance-IoU Loss: Faster and better learning for bounding box regression. arXiv Preprint 2019. Source: <https://arxiv.org/abs/1911.08287>.
  30. Bodla N, Singh B, Chellappa R, Davis LS. Soft-NMS – Improving object detection with one line of code. IEEE Int Conf on Computer Vision, Venice 2017: 5562-5570. DOI: 10.1109/ICCV.2017.593.
  31. He YH, Zhang XY, Savvides M, Kitani K. Bounding box regression with uncertainty for accurate object detection. arXiv Preprint 2018. Source: <https://arxiv.org/abs/1809.08545>.
  32. Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? The KITTI vision benchmark suite. IEEE Conf on Computer Vision and Pattern Recognition 2012: 3354-3361. DOI: 10.1109/CVPR.2012.6248074.

© 2009, IPSI RAS
151, Molodogvardeiskaya str., Samara, 443001, Russia; E-mail: journal@computeroptics.ru ; Tel: +7 (846) 242-41-24 (Executive secretary), +7 (846) 332-56-22 (Issuing editor), Fax: +7 (846) 332-56-20