Abstract
Forest fires are severe natural disasters that destroy forest ecosystems. Fire detection technology based on convolutional neural networks is now widely used in forest resource protection because it enables rapid analysis. However, in forest flame and smoke detection the targets span a wide and continuously changing range of scales, so existing detectors struggle to achieve satisfactory results. This paper proposes an improved YOLOX method for multi-scale forest fire detection. The method introduces a novel feature pyramid model that reduces the information loss of high-level forest fire feature maps and enhances the representation ability of the feature pyramid. It also applies a small-object data augmentation strategy to enrich the forest fire dataset and better reflect real forest fire scenes. In the experiments, the proposed model reaches an mAP of 79.64%, about 4.89% higher than the baseline YOLOX. The method improves forest fire detection accuracy, reduces false alarms, and is suitable for real forest fire scenarios.
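The abstract does not describe the small-object augmentation strategy in detail. As a rough illustration of the general idea behind such strategies (copy-pasting small flame and smoke regions to new positions so the detector sees more small instances per image), a minimal Python sketch is given below; the function name paste_small_objects, the 32x32 area threshold, and the copy count are illustrative assumptions, not details taken from the paper.

import random

def paste_small_objects(image, boxes, max_copies=3, small_area=32 * 32, seed=None):
    # image: H x W x 3 uint8 NumPy array; boxes: list of integer (x1, y1, x2, y2).
    # Small flame/smoke regions are copied to random positions and their
    # bounding boxes duplicated. Illustrative sketch only; no overlap check.
    rng = random.Random(seed)
    h, w = image.shape[:2]
    out = image.copy()
    new_boxes = list(boxes)
    small = [b for b in boxes if (b[2] - b[0]) * (b[3] - b[1]) < small_area]
    for (x1, y1, x2, y2) in small:
        bw, bh = x2 - x1, y2 - y1
        for _ in range(rng.randint(1, max_copies)):
            nx = rng.randint(0, w - bw)
            ny = rng.randint(0, h - bh)
            out[ny:ny + bh, nx:nx + bw] = image[y1:y2, x1:x2]
            new_boxes.append((nx, ny, nx + bw, ny + bh))
    return out, new_boxes

In a training pipeline, such an augmentation would typically be applied on the fly before resizing and mosaic-style mixing, so that the duplicated boxes are transformed consistently with the image.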
Data availability
Data are available from the authors upon request.
Funding
This work was supported by the National Natural Science Foundation of China (No. 61872260) and National Key Research and Development Program of China (No. 2021YFB3300503).
Ethics declarations
Conflict of interest
All authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Wang, T., Wang, J., Wang, C. et al. Improving YOLOX network for multi-scale fire detection. Vis Comput 40, 6493–6505 (2024). https://doi.org/10.1007/s00371-023-03178-1