Application of deep neural networks for identification of alphanumeric information from baggage tags at the airport

Authors


Ivliyev Yevgeniy Andreyevich
Master's student, Basic Department of the Faculty "Automation, Mechatronics and Control"
Russia, Don State Technical University
123ivliev123@mail.ru


Obukhov Pavel Serafimovich
Candidate of Technical Sciences, Associate Professor, Dean of the Faculty "Automation, Mechatronics and Control"
Russia, Don State Technical University
pobuhov@spark-mail.ru

Abstract

The article is devoted to the development and analysis of methods for identifying dynamic objects. A neural network with the SSD InceptionV2 architecture has been developed to solve the problem of detecting baggage tags and barcodes. Several approaches to recognizing the alphanumeric information are considered: Tesseract, SSD InceptionV2, OpenCV, and a fully connected neural network. The performance of the methods has been tested on real images.
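
For illustration only (the article's own implementation is not reproduced here), a minimal Python sketch of such a pipeline, assembled from the tools named in the abstract, could look as follows: a frozen SSD InceptionV2 detector exported with the TensorFlow Object Detection API locates the baggage tag, OpenCV crops the detected region, and Tesseract (via pytesseract) reads the alphanumeric content. The file paths are hypothetical placeholders; the tensor names follow the Object Detection API export convention.

# Minimal sketch: detect a baggage tag with a frozen SSD InceptionV2 graph
# (TensorFlow Object Detection API export) and read its text with Tesseract.
# GRAPH_PATH and IMAGE_PATH are hypothetical placeholders.
import cv2
import numpy as np
import pytesseract
import tensorflow as tf

GRAPH_PATH = "frozen_inference_graph.pb"  # exported detector (placeholder path)
IMAGE_PATH = "baggage_tag.jpg"            # test image (placeholder path)

# Load the frozen detection graph.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(GRAPH_PATH, "rb") as f:
    graph_def.ParseFromString(f.read())
graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name="")

# Read the image; the detector expects an RGB uint8 tensor with a batch dimension.
image_bgr = cv2.imread(IMAGE_PATH)
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)

with tf.compat.v1.Session(graph=graph) as sess:
    boxes, scores = sess.run(
        ["detection_boxes:0", "detection_scores:0"],
        feed_dict={"image_tensor:0": image_rgb[np.newaxis, ...]},
    )

# Crop the highest-scoring detection; boxes are normalized [ymin, xmin, ymax, xmax].
h, w = image_rgb.shape[:2]
ymin, xmin, ymax, xmax = boxes[0][0]
crop = image_bgr[int(ymin * h):int(ymax * h), int(xmin * w):int(xmax * w)]

# Recognize the alphanumeric content of the cropped tag with Tesseract
# (--psm 7 treats the crop as a single line of text).
gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
text = pytesseract.image_to_string(gray, config="--psm 7")
print("score:", float(scores[0][0]), "text:", text.strip())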

Keywords

computer vision, neural network, barcode, IATA airport code, TensorFlow, OpenCV, Python.


Project funding

The article was prepared with support from, and within the framework of, the DR-2020 event "International Competition of Scientific Works and Projects of Young Researchers 'Digital Region – 2020'" (Science and Education on-line).

Suggested citation

Ivliyev Yevgeniy Andreyevich, Obukhov Pavel Serafimovich
Application of deep neural networks for identification of alphanumeric information from baggage tags at the airport // Modern Management Technology. ISSN 2226-9339. – #3 (93). Art. #9305. Date issued: . Available at: https://sovman.ru/en/article/9305/

The full text of the article is available only in Russian.


References

  1. Obukhov P.S., Ivliyev Ye.A., Ivliyev V.A. Identification of alphanumeric information from a baggage tag based on a neural network [Obukhov P.S. Identifikatsiya tsifrobukvennoy informatsii s bagazhnoy birki na osnove neyronnoy seti] // Dynamics of Technical Systems (Rostov-on-Don, September 11–13, 2019). – Rostov-on-Don, 2019. P. 65–68.
  2. TensorFlow documentation [Electronic resource]. Available at: https://www.tensorflow.org/guide/keras?hl=ru
  3. Ren S. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks / S. Ren, K. He, R. Girshick, J. Sun // Advances in Neural Information Processing Systems. Vol. 39. 2015. P. 1137–1149.
  4. Szegedy C. Rethinking the Inception Architecture for Computer Vision / C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna // IEEE Conference on Computer Vision and Pattern Recognition. Vol. 1. 2016. P. 2818–2826.
  5. Liu W. SSD: Single Shot MultiBox Detector / W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.Y. Fu, A.C. Berg // European Conference on Computer Vision 2016. Vol. 1. 2016. P. 21–37.
  6. Howard A.G. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications / A.G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, H. Adam // arXiv 2017, arXiv:1704.04861.
  7. Huang J. Speed/Accuracy Trade-Offs for Modern Convolutional Object Detectors / J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama // 30th IEEE Conference on Computer Vision and Pattern Recognition. arXiv 2017, arXiv:1611.10012v3.
  8. Tesseract wiki [Electronic resource]. Available at: https://github.com/tesseract-ocr/tesseract/wiki