Domain Adaptive Object Detection with Dehazing Module

  • Conference paper
  • In: Advanced Intelligent Computing Technology and Applications (ICIC 2024)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14872)
  • 512 Accesses

Abstract

In foggy object detection tasks, airborne particles reduce imaging clarity, causing a significant drop in detection accuracy. Existing dehazing networks lack evaluation metrics tied to high-level tasks, and simply prepending a dehazing network limits the adaptability of the object detection network. To address these issues, this paper proposes training the dehazing network with a perceptual loss computed through the object detection network. This approach improves the dehazing network's accuracy on high-level tasks and overcomes the constraints of quantitative image-quality indexes such as PSNR. We compare DefogNet trained with perceptual loss against pixel-level loss, and obtain the best PSNR and SSIM scores when both losses are used together. Although an object detection network connected to the dehazing network can handle detection tasks in foggy scenes, its accuracy still decreases in such scenarios. We therefore propose the DefogDA-FasterRCNN network, which incorporates domain adaptation into the integrated network, making the object detection module domain-adaptive across the foggy and non-foggy domains that pass through the dehazing module. Foggy images gain clearer features through the dehazing network, and the residual negative impact of dehazed foggy images is weakened by the domain adaptation.
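The combined training objective described in the abstract — a pixel-level reconstruction loss plus a perceptual loss computed on feature maps of the (frozen) object detection network — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the toy cross-correlation feature extractor, and the weight `lam` are all assumptions; a real system would substitute backbone activations of the detection network (e.g. Faster R-CNN's convolutional features) and train with autograd.

```python
import numpy as np

def pixel_loss(dehazed, clear):
    """Pixel-level loss: mean squared error between the dehazed output
    and the ground-truth clear image."""
    return np.mean((dehazed - clear) ** 2)

def detector_features(img, kernel):
    """Stand-in for an early feature map of the frozen detection
    network: a single valid 2-D cross-correlation."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def perceptual_loss(dehazed, clear, kernel):
    """Perceptual loss: MSE between detector feature maps of the
    dehazed image and of the clear image."""
    return np.mean((detector_features(dehazed, kernel)
                    - detector_features(clear, kernel)) ** 2)

def total_loss(dehazed, clear, kernel, lam=0.1):
    """Combined objective: pixel loss plus weighted perceptual loss.

    The weight `lam` is a hypothetical hyperparameter balancing image
    fidelity against downstream detection usefulness."""
    return pixel_loss(dehazed, clear) + lam * perceptual_loss(dehazed, clear, kernel)
```

Because the perceptual term is measured in the detector's feature space, the dehazing network is pushed toward reconstructions that preserve detection-relevant structure, not just low PSNR error.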



Acknowledgments

This work was supported by the Natural Science Foundation of Tianjin (No. 21ICYBJC00640) and by the 2023 CCF-Baidu Songguo Foundation (Research on Scene Text Recognition Based on PaddlePaddle).

Author information

Corresponding author

Correspondence to Di Sun.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Pan, G. et al. (2024). Domain Adaptive Object Detection with Dehazing Module. In: Huang, DS., Pan, Y., Zhang, Q. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2024. Lecture Notes in Computer Science, vol 14872. Springer, Singapore. https://doi.org/10.1007/978-981-97-5612-4_7

  • DOI: https://doi.org/10.1007/978-981-97-5612-4_7

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-5611-7

  • Online ISBN: 978-981-97-5612-4

  • eBook Packages: Computer Science (R0)
