
Co-saliency Detection Based on Superpixel Clustering

  • Conference paper
  • First Online:
Knowledge Science, Engineering and Management (KSEM 2017)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 10412))

Abstract

Existing co-saliency detection methods achieve poor performance in both computation speed and accuracy. We therefore propose a co-saliency detection method based on superpixel clustering. The proposed method consists of three parts: a multi-scale visual saliency map, a weak co-saliency map, and a fusion stage. The multi-scale visual saliency map is generated from a multi-scale pyramid of content-sensitive superpixels. The weak co-saliency map is computed by clustering superpixels in a feature space combining RGB and CIELab color features with Gabor texture features, in order to represent global correlation across the image group. Finally, a strong co-saliency map is obtained by fusing the multi-scale visual saliency map and the weak co-saliency map using three kinds of metrics (contrast, position, and repetition). Experimental results on public datasets show that the proposed method improves both the computation speed and the accuracy of co-saliency detection, producing better co-saliency maps in less time than other state-of-the-art co-saliency detection methods.
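To make the clustering stage concrete, the following is a minimal sketch of the weak co-saliency idea described above: superpixel feature vectors pooled from several images are clustered, and each cluster is scored by a contrast metric (distance of the cluster center from the global feature mean) and a repetition metric (the fraction of images the cluster appears in). This is an illustrative reconstruction, not the authors' implementation; the function names, the choice of plain k-means, and the multiplicative combination of the two metrics are all assumptions, and the position metric from the paper is omitted for brevity.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means on rows of X; returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Squared Euclidean distance from each point to each center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

def cluster_cosaliency(features, image_ids, k=3):
    """Score superpixel clusters by contrast x repetition.

    features  : (n, d) array, one feature vector per superpixel
                (e.g. concatenated RGB, CIELab, Gabor responses).
    image_ids : (n,) array saying which image each superpixel came from.
    Returns per-superpixel cluster labels and per-cluster scores in [0, 1].
    """
    labels, centers = kmeans(features, k)
    global_mean = features.mean(axis=0)
    n_images = len(set(image_ids.tolist()))
    scores = np.zeros(k)
    for j in range(k):
        mask = labels == j
        # Contrast: clusters far from the global mean stand out.
        contrast = np.linalg.norm(centers[j] - global_mean)
        # Repetition: co-salient regions recur across the image group.
        repetition = len(set(image_ids[mask].tolist())) / n_images
        scores[j] = contrast * repetition
    scores /= scores.max() + 1e-12  # normalize to [0, 1]
    return labels, scores
```

A cluster that is both distinctive (high contrast) and present in most images of the group (high repetition) receives a high score; projecting these cluster scores back onto the superpixels of each image would yield the weak co-saliency map that the paper then fuses with the multi-scale visual saliency map.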



Acknowledgment

This work was partially supported by National Natural Science Foundation of China (NSFC Grant Nos. 61170124, 61272258, 61301299, 61272005, 61572085), Provincial Natural Science Foundation of Jiangsu (Grant Nos. BK20151254, BK20151260), Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University (Grant No. 93K172016K08), and Collaborative Innovation Center of Novel Software Technology and Industrialization.

Author information


Correspondence to Yi Ji or Chunping Liu.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Zhu, G., Ji, Y., Jiang, X., Xu, Z., Liu, C. (2017). Co-saliency Detection Based on Superpixel Clustering. In: Li, G., Ge, Y., Zhang, Z., Jin, Z., Blumenstein, M. (eds) Knowledge Science, Engineering and Management. KSEM 2017. Lecture Notes in Computer Science(), vol 10412. Springer, Cham. https://doi.org/10.1007/978-3-319-63558-3_24

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-63558-3_24

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-63557-6

  • Online ISBN: 978-3-319-63558-3

  • eBook Packages: Computer Science (R0)

