
Incremental printing product defect detection based on contextual information

  • Original Paper
  • Signal, Image and Video Processing

Abstract

Small object detection is a critical research area in computer vision, with broad applications in industrial defect detection and satellite remote sensing. In printing defect detection, defects on printed materials are often small, weak in detail, and low in contrast. While mainstream deep learning-based object detection algorithms perform well on conventional objects, they face significant challenges in extracting features for printing defects. Additionally, the small size and blurred visual characteristics of defects make detection results highly susceptible to background interference, leading to a high false positive rate. To address these issues, this paper proposes a progressive printing defect detection method based on contextual information (PCINet). Specifically designed for small defects with unclear visual features, PCINet enhances defect feature representation by reconfiguring the backbone network during the feature extraction phase, thereby improving detection performance. A global semantic reconstruction module is introduced to progressively explore the contextual relationships between defect targets and their surrounding environment. This module includes a global semantic awareness unit, which expands the receptive field and enriches regions of interest, and a regional interaction-assisted reconstruction unit, which refines defect edges and suppresses redundant background interference. Experimental results demonstrate that the proposed method performs well on the printing defect detection dataset, the Printing Defect Dataset 2, and the DOTA-V1.0 dataset. It significantly reduces the false positive rate and exhibits strong robustness in detecting defects under complex backgrounds and low contrast. Furthermore, PCINet shows good generalization capabilities in other small object detection tasks, underscoring its broad application potential.
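The abstract states that the global semantic awareness unit improves detection of small, low-contrast defects by expanding the receptive field. The paper's implementation is not reproduced here; as a hedged illustration of the general mechanism, the sketch below shows how dilated (atrous) convolution, a standard receptive-field-expansion technique, lets a fixed-size kernel cover a wider context window. The function name `dilated_conv2d` and the toy input are ours, not PCINet's, and this is only one plausible realization of the unit described.

```python
def dilated_conv2d(image, kernel, dilation=1):
    """Valid-mode 2D convolution with a dilated kernel.

    image, kernel: lists of lists (rows of numbers).
    With dilation d, a k x k kernel samples the input at stride d,
    so its effective receptive field is ((k-1)*d + 1) per side
    without adding any parameters.
    """
    kh, kw = len(kernel), len(kernel[0])
    eff_h = (kh - 1) * dilation + 1  # effective kernel height
    eff_w = (kw - 1) * dilation + 1  # effective kernel width
    H, W = len(image), len(image[0])
    out = []
    for i in range(H - eff_h + 1):
        row = []
        for j in range(W - eff_w + 1):
            s = 0.0
            for ki in range(kh):
                for kj in range(kw):
                    # Sample the input on a dilated grid.
                    s += image[i + ki * dilation][j + kj * dilation] * kernel[ki][kj]
            row.append(s)
        out.append(row)
    return out

# Toy 8x8 "feature map": values 0..63 row by row.
img = [[r * 8 + c for c in range(8)] for r in range(8)]
ker = [[1.0] * 3 for _ in range(3)]

y1 = dilated_conv2d(img, ker, dilation=1)  # 3x3 window -> 6x6 output
y2 = dilated_conv2d(img, ker, dilation=2)  # effective 5x5 window -> 4x4 output
```

At dilation 2 the same nine weights see a 5×5 neighborhood, which is why stacking a few dilated layers is a cheap way to pull in the surrounding context that small-defect detection needs.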


Figures 1–10 appear in the full article; no captions are available in this preview.


Data availability

Some or all data, models, or code generated or used during the study are available from the corresponding author by request.


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Project No. 62076200, the Key R&D Project of Shaanxi Province under Grants 2023-YBGY-087 and 2022ZDLGY01-03, the Science and Technology Project of Weinan under Grant 2021ZDYF-GYCX-150, and the Natural Science Basic Research Program of Shaanxi under Grant No. 2024JC-YBMS-343.

Author information


Contributions

Zheng, Yang, and Chen wrote the main manuscript, and the other authors reviewed it.

Corresponding author

Correspondence to Yuanlin Zheng.

Ethics declarations

Competing interests

The authors declare no competing interests.

Conflict of interest

This paper does not contain any studies with human or animal subjects and all authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zheng, Y., Yang, F., Chen, W. et al. Incremental printing product defect detection based on contextual information. SIViP 19, 317 (2025). https://doi.org/10.1007/s11760-025-03948-5


