Detection and determination of the exact location of the fire centre using a convolutional neural network, panoramic image and 3D model of the observed object
https://doi.org/10.22227/0869-7493.2024.33.04.13-21
Abstract
Introduction. When ensuring fire safety at large industrial facilities, it is important to respond to emerging threats as quickly as possible. This paper discusses a new method for detecting and determining the exact location of a fire centre in real time, based on modern image processing and artificial intelligence techniques.
Aims and Objectives. The aim of the work is to create a system capable of detecting fire in a panoramic image and, based on a 3D model, determining the coordinates of the detected threat.
Objectives of the work:
- training a convolutional neural network (CNN) and adapting it to work with panoramic images (a training sketch is given after this list);
- developing an algorithm for determining the spatial coordinates of an object detected in the image.
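By way of illustration, a minimal training sketch is given below. It assumes the standard Ultralytics YOLOv5 repository and a custom fire/smoke dataset already annotated in YOLO format; the dataset paths and file names are hypothetical and are not taken from the paper.

```python
# A minimal sketch, assuming the Ultralytics YOLOv5 repository and a custom
# fire/smoke dataset in YOLO annotation format (all paths are hypothetical).
import yaml

dataset_config = {
    "path": "datasets/fire_smoke",   # hypothetical dataset root
    "train": "images/train",         # training images, relative to "path"
    "val": "images/val",             # validation images, relative to "path"
    "nc": 2,                         # number of classes
    "names": ["fire", "smoke"],      # the two classes named in the abstract
}

with open("fire_smoke.yaml", "w", encoding="utf-8") as f:
    yaml.safe_dump(dataset_config, f, sort_keys=False)

# Training is then launched with the standard YOLOv5 training script, e.g.:
#   python train.py --img 640 --batch 16 --epochs 100 \
#       --data fire_smoke.yaml --weights yolov5s.pt
```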
Methods. The paper describes the structure of the proposed system. Image-based fire detection methods are reviewed, and the choice of an approach based on a convolutional neural network is justified. The application of the neural network to a panoramic image is considered, and an approach to correcting distortions in the image is described in order to improve detection accuracy. A method for combining a 3D model with the panoramic image and determining the spatial coordinates of detected fires is described.
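The abstract does not detail how the 3D model and the panorama are combined. One plausible realization, given below purely as a sketch under that assumption, is to cast a ray from the panoramic camera through the pixel at the centre of a detected fire and intersect it with the triangles of the object's 3D model; the image size, camera position and mesh in the code are illustrative.

```python
# A minimal ray-casting sketch, not the authors' exact algorithm: an
# equirectangular pixel is converted to a viewing ray that is intersected
# with the triangles of the 3D model of the observed object.
import numpy as np

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit direction vector."""
    lon = (u / width) * 2.0 * np.pi - np.pi        # azimuth in [-pi, pi]
    lat = np.pi / 2.0 - (v / height) * np.pi       # elevation in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def ray_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; returns hit distance or None."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def locate_fire(pixel, camera_pos, mesh_triangles, width=4096, height=2048):
    """World coordinates of the closest model surface seen through the pixel."""
    direction = pixel_to_ray(*pixel, width, height)
    hits = [t for tri in mesh_triangles
            if (t := ray_triangle(camera_pos, direction, tri)) is not None]
    return camera_pos + min(hits) * direction if hits else None
```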
Results and Discussion. The paper presents the results obtained by the system in a virtual environment in which fires were generated. The environment emulates all key components of the system, such as the panoramic camera and the 3D model of the observed object. In the experiments carried out, the error in determining the coordinates of the fire was about 20 cm.
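For reference, the quoted error can be computed as the Euclidean distance between the coordinates returned by the system and the ground-truth position of the generated fire in the virtual scene; the values in the snippet below are made up for illustration.

```python
# Localization error as the Euclidean distance between predicted and
# ground-truth fire coordinates (all values below are hypothetical).
import numpy as np

predicted    = np.array([12.34, 0.85, 7.10])   # metres, hypothetical system output
ground_truth = np.array([12.50, 0.80, 7.22])   # metres, hypothetical scene data

error_m = np.linalg.norm(predicted - ground_truth)
print(f"localization error: {error_m * 100:.1f} cm")   # about 20.6 cm here
```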
Conclusions. The paper examines a new approach to detecting fires using computer vision. A neural network with the YOLOv5 architecture was trained to recognize fire and smoke. Stereographic projection was used to reduce distortion. A method for determining the spatial coordinates of a fire by combining a 3D model with a panoramic image was developed and applied.
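A minimal inference sketch is given below. It loads custom YOLOv5 weights through the official torch.hub entry point and filters detections of the two classes named above; the weight and image file names are hypothetical, and the stereographic reprojection of the panorama is not implemented here.

```python
# A minimal YOLOv5 inference sketch (file names are hypothetical); the input
# image is assumed to be a view prepared from the panorama, e.g. a
# stereographically reprojected crop.
import torch

# Load the custom fire/smoke model through the official torch.hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.25                              # confidence threshold

results = model("panorama_view.jpg")           # hypothetical reprojected view
detections = results.pandas().xyxy[0]          # xmin, ymin, xmax, ymax, confidence, class, name
fires = detections[detections["name"].isin(["fire", "smoke"])]
print(fires[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```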
About the Authors
A. A. Evsikov
Russian Federation
Andrey A. EVSIKOV, Postgraduate Student
Leninskiy Avenue, 65, Bldg. 1, Moscow, 119991
RSCI AuthorID: 1211560
I. V. Samarin
Russian Federation
Ilya V. SAMARIN, Dr. Sci. (Eng.), Docent, Head of Department of Automation of Technological Processes
Leninskiy Avenue, 65, Bldg. 1, Moscow, 119991
RSCI AuthorID: 867674
References
1. Zaman T., Hasan M., Ahmed S., Ashfaq S. Fire detection using computer vision. IEEE 61st International Midwest Symposium on Circuits and Systems (MWSCAS). 2018; 356-359. DOI: 10.1109/MWSCAS.2018.8623842
2. Manjunatha K., Mohana H., Vijaya P. Implementation of computer vision based industrial fire safety automation by using neuro-fuzzy algorithms. I.J. Information Technology and Computer Science. 2015; 4:14-27. DOI: 10.5815/ijitcs.2015.04.02
3. Qi X., Ebert J. A computer vision-based method for fire detection in color videos. International Journal of Imaging. 2009; 2(9):22-34.
4. Ba Hala A.M.A. Fire detection on earth's surface images in the LAB color model. Ekonomika. Informatsionnyye tekhnologii/Economics. Information Technologies. 2021; 48(4):831-842. DOI: 10.52575/2687-0932-2021-48-4-831-842 (rus).
5. Celik T., Hasan D. Fire detection in video sequences using a generic color model. Fire Safety Journal. 2009; 44(2):147-158. DOI: 10.1016/j.firesaf.2008.05.005
6. Marbach G., Markus L., Thomas B. An image processing technique for fire detection in video images. Fire Safety Journal. 2006; 41(4):285-289. DOI: 10.1016/j.firesaf.2006.02.001
7. Zhang Q., Xu J., Xu L., Guo H. Deep convolutional neural networks for forest fire detection. Proceedings of the 2016 International Forum on Management, Education and Information Technology Application. 2016; 568-575. DOI: 10.2991/ifmeita-16.2016.105
8. Frizzi S., Kaabi R., Bouchouicha M., Ginoux J.M., Moreau E., Fnaiech F. Convolutional neural network for video fire and smoke detection. IECON 2016 — 42nd Annual Conference of the IEEE Industrial Electronics Society. IEEE. 2016; 877-882. DOI: 10.1109/iecon.2016.7793196
9. Muhammad K., Ahmad J., Mehmood I., Rho S., Baik S.W. Convolutional neural networks based fire detection in surveillance videos. IEEE Access. 2018; 6:18174-18183. DOI: 10.1109/access.2018.2812835
10. Li M., Zhang Y., Mu L., Xin J., Yu Z., Jiao S. et al. A real-time fire segmentation method based on a deep learning approach. IFAC-PapersOnLine. 2022; 55(6):145-150. DOI: 10.1016/j.ifacol.2022.07.120
11. Girshick R., Donahue J., Darrell T., Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. IEEE Conference on Computer Vision and Pattern Recognition. 2014; 580-587. DOI: 10.1109/CVPR.2014.81
12. He K., Gkioxari G., Dollar P., Girshick R. Mask R-CNN. IEEE International Conference on Computer Vision (ICCV). 2017; 2980-2988. DOI: 10.1109/ICCV.2017.322
13. Lin T., Dollar P., Girshick R., He K., Hariharan B., Belongie S. Feature pyramid networks for object detection. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017; 936-944. DOI: 10.1109/CVPR.2017.106
14. Redmon J., Divvala S., Girshick R., Farhadi A. You only look once: unified, real-time object detection. CVPR. 2016; 779-788. DOI: 10.1109/CVPR.2016.91
15. Liu W., Anguelov D., Erhan D., Szegedy Ch., Reed S., Fu Ch.-Ya. et al. SSD: single shot multibox detector. Computer Vision — ECCV 2016. Lecture Notes in Computer Science. 2016; 9905. DOI: 10.1007/978-3-319-46448-0_2
16. Lin T., Goyal P., Girshick R., He K., Dollar P. Focal loss for dense object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2020; 42(2):318-327. DOI: 10.1109/TPAMI.2018.2858826
17. Wang C., Bochkovskiy A., Liao H. Scaled-YOLOv4: Scaling cross stage partial network. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2021; 13024-13033. DOI: 10.1109/CVPR46437.2021.01283
18. Evsikov A.A., Samarin I.V. Detection of fires at technological facilities using convolutional neural network. Pozharovzryvobezopasnost/Fire and Explosion Safety. 2023; 32(5):40-48. DOI: 10.22227/0869-7493.2023.32.05.40-48 (rus).
19. Yang W., Qian Y., Kämäräinen J.-K., Cricri F., Fan L. Object detection in equirectangular panorama. 24th International Conference on Pattern Recognition (ICPR). Beijing, China, 2018; 2190-2195. DOI: 10.1109/ICPR.2018.8546070
20. Deng F., Zhu X., Ren J. Object detection on panoramic images based on deep learning. 3rd International Conference on Control, Automation and Robotics (ICCAR). Nagoya, Japan, 2017; 375-380. DOI: 10.1109/ICCAR.2017.7942721
21. Evsikov A.A., Samarin I.V. Fire recognition in panoramic images using convolutional neural network. Avtomatizatsiya i informatizatsiya TEK/Automation and informatization of the fuel and energy complex. 2023; 12(605):5-10. DOI: 10.33285/2782-604X-2023-12(605)-5-10 (rus).
22. Krylov V.A., Kenny E., Dahyot R. Automatic discovery and geotagging of objects from street view imagery. Remote Sensing. 2018; 10(5). DOI: 10.3390/rs10050661
23. Babahajiani P., Fan L., Kämäräinen J.K. Urban 3D segmentation and modelling from street view images and LiDAR point clouds. Machine Vision and Applications. 2017; 28:679-694. DOI: 10.1007/s00138-017-0845-3
For citations:
Evsikov A.A., Samarin I.V. Detection and determination of the exact location of the fire centre using a convolutional neural network, panoramic image and 3D model of the observed object. Pozharovzryvobezopasnost/Fire and Explosion Safety. 2024;33(4):13-21. (In Russ.) https://doi.org/10.22227/0869-7493.2024.33.04.13-21