
Study on predicting sentiment from images using categorical and sentimental keyword-based image retrieval

The Journal of Supercomputing

Abstract

Visual stimuli are among the stimuli that most strongly affect human sentiment. Many studies have attempted to find relationships between visual elements in images and sentimental elements using statistical approaches. In many cases, the range of sentiments that an image evokes varies with its category, such as landscape, portrait, sports, or still life. Therefore, to improve the performance of sentiment prediction, an individual prediction model must be established for each image category. However, collecting large amounts of ground-truth sentiment data is one of the obstacles faced by studies in this field. In this paper, we propose an approach that acquires a training data set for both category classification and sentiment prediction from images. Using this approach, we collect a training data set and build a sentiment predictor for images. First, we estimate the category of a given image; then we predict its sentiment as coordinates in the arousal–valence space using the predictor for the estimated category. We show that the performance of our approach approximates that obtained with ground-truth data. Based on our experiments, we argue that our approach, which utilizes big data on the web as the training set for predicting content sentiment, is useful for practical purposes.
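The two-stage pipeline the abstract describes — estimate an image's category first, then predict (valence, arousal) coordinates with a predictor trained for that category — can be sketched as follows. This is a minimal illustration, not the paper's actual models: the feature vectors and training labels are random stand-ins, and the nearest-centroid classifier and per-category k-NN regressor are assumed choices for the two stages.

```python
import numpy as np

rng = np.random.default_rng(0)
categories = ["landscape", "portrait", "sports", "still_life"]

# Stand-in training set: image feature vectors, category labels,
# and ground-truth (valence, arousal) coordinates in [-1, 1].
X = rng.normal(size=(200, 16))
y_cat = rng.integers(0, len(categories), size=200)
y_va = rng.uniform(-1.0, 1.0, size=(200, 2))

# Stage 1: nearest-centroid category classifier.
centroids = np.stack([X[y_cat == c].mean(axis=0)
                      for c in range(len(categories))])

def classify(x):
    """Estimate the image category as the index of the nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

def predict_sentiment(x, k=5):
    """Stage 2: k-NN regression over the estimated category's training data,
    returning (valence, arousal) coordinates."""
    c = classify(x)
    Xc, Yc = X[y_cat == c], y_va[y_cat == c]
    nearest = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]
    return Yc[nearest].mean(axis=0)

va = predict_sentiment(rng.normal(size=16))
```

The key design point mirrored here is the per-category split: stage 2 consults only the training examples of the estimated category, so each category effectively has its own sentiment predictor.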



Acknowledgments

This work was supported by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Creative Content Agency (KOCCA) under the Culture Technology (CT) Research and Development Program 2015.

Author information

Correspondence to Dongwann Kang.


About this article


Cite this article

Seo, S., Kang, D. Study on predicting sentiment from images using categorical and sentimental keyword-based image retrieval. J Supercomput 72, 3478–3488 (2016). https://doi.org/10.1007/s11227-015-1510-0


