Abstract
Intangible cultural heritage (ICH) is a relatively recent term coined to represent living cultural expressions and practices, which are recognised by communities as distinct aspects of identity. The safeguarding of ICH has become a topic of international concern primarily through the work of United Nations Educational, Scientific and Cultural Organization (UNESCO). However, little research has been done on the role of new technologies in the preservation and transmission of intangible heritage. This chapter examines resources, projects and technologies providing access to ICH and identifies gaps and constraints. It draws on research conducted within the scope of the collaborative research project, i-Treasures. In doing so, it covers the state of the art in technologies that could be employed for access, capture and analysis of ICH in order to highlight how specific new technologies can contribute to the transmission and safeguarding of ICH.
References
F. Cameron, S. Kenderdine, Theorizing Digital Cultural Heritage: A Critical Discourse (MIT Press, Cambridge, 2010)
M. Ioannides, D. Fellner, A. Georgopoulos, D. Hadjimitsis (eds.), Digital Heritage, in 3rd International Conference, EuroMed 2010, Lemessos, Cyprus, Proceedings (Springer, Berlin, 2010)
K. Dimitropoulos, S. Manitsaris, F. Tsalakanidou, S. Nikolopoulos, B. Denby, S. Al Kork, L. Crevier-Buchman, C. Pillot-Loiseau, S. Dupont, J. Tilmanne, M. Ott, M. Alivizatou, E. Yilmaz, L. Hadjileontiadis, V. Charisis, O. Deroo, A. Manitsaris, I. Kompatsiaris, N. Grammalidis, Capturing the intangible: an introduction to the i-Treasures project, in Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISAPP 2014), Lisbon, 5–8 Jan 2014
N. Aikawa, An historical overview of the preparation of the UNESCO international convention for the safeguarding of intangible heritage. Museum Int. 56, 137–149 (2004)
V. Hafstein, Intangible heritage as list: from masterpieces to representation, in Intangible Heritage, ed. by L. Smith, N. Akagawa (Routledge, Abingdon, 2009), pp. 93–111
P. Nas, Masterpieces of oral and intangible heritage: reflections on the UNESCO world heritage list. Curr. Anthropol. 43(1), 139–143 (2002)
M. Alivizatou, The UNESCO programme for the proclamation of masterpieces of the oral and intangible heritage of humanity: a critical examination. J. Museum Ethnogr. 19, 34–42 (2007)
L. Bolton, Unfolding the Moon: Enacting Women's Kastom in Vanuatu (University of Hawai'i Press, Honolulu, 2003)
K. Huffman, The fieldworkers of the Vanuatu cultural centre and their contribution to the audiovisual collections, in Arts of Vanuatu, ed. by J. Bonnemaison, K. Huffman, D. Tryon (University of Hawai'i Press, Honolulu, 1996), pp. 290–293
S. Zafeiriou, L. Yin, 3D facial behaviour analysis and understanding. Image Vis. Comput. 30, 681–682 (2012)
P. Ekman, R. Levenson, W. Friesen, Emotions differ in autonomic nervous system activity. Science 221, 1208–1210 (1983)
O. Engwall, Modeling of the vocal tract in three dimensions, in Proceedings of Eurospeech '99, Budapest, Hungary, 1999, pp. 113–116
S. Fels, J.E. Lloyd, K. van den Doel, F. Vogt, I. Stavness, E. Vatikiotis-Bateson, Developing physically-based, dynamic vocal tract models using ArtiSynth, in Proceedings of ISSP, 2006, pp. 419–426
M. Stone, Toward a model of three-dimensional tongue movement. J. Phon. 19, 309–320 (1991)
P. Badin, G. Bailly, L. Reveret, M. Baciu, C. Segebarth, C. Savariaux, Three-dimensional linear articulatory modeling of tongue, lips and face, based on MRI and video images. J. Phon. 30(3), 533–553 (2002)
M. Stone, A three-dimensional model of tongue movement based on ultrasound and X-ray microbeam data. J. Acoust. Soc. Am. 87, 2207 (1990)
O. Engwall, From real-time MRI to 3D tongue movements, in Proceedings, 8th International Conference on Spoken Language Processing (ICSLP), Jeju Island, Vol. 2, 2004, pp. 1109–1112
M. Stone, A. Lundberg, Three-dimensional tongue surface shapes of English consonants and vowels. J. Acoust. Soc. Am. 99(6), 3728–3737 (1996)
N. Henrich, B. Lortat-Jacob, M. Castellengo, L. Bailly, X. Pelorson, Period-doubling occurrences in singing: the 'bassu' case in traditional Sardinian 'A Tenore' singing, in Proceedings of the International Conference on Voice Physiology and Biomechanics, Tokyo, July 2006
N. Henrich, L. Bailly, X. Pelorson, B. Lortat-Jacob, Physiological and physical understanding of singing voice practices: the Sardinian Bassu case, AIRS Start-up meeting, Prince Edward Island, 2009
W. Cho, J. Hong, H. Park, Real-time ultrasonographic assessment of true vocal fold length in professional singers. J. Voice 26(6), 1–6 (2012)
G. Troup, T. Griffiths, M. Schneider-Kolsky, T. Finlayson, Ultrasound observation of vowel tongue shapes in trained singers, in Proceedings of the 30th Condensed Matter and Materials Meeting, Wagga, 2006
T. Coduys, C. Henry, A. Cont, TOASTER and KROONDE: high-resolution and high-speed real-time sensor interfaces, in Proceedings of the Conference on New Interfaces for Musical Expression, Singapore, 2004, pp. 205–206
F. Bevilacqua, B. Zamborlin, A. Sypniewski, N. Schnell, F. Guedy, N. Rasamimanana, Continuous realtime gesture following and recognition, in Gesture in Embodied Communication and Human-Computer Interaction: 8th International Gesture Workshop (Springer, Berlin, 2010), pp. 73–84
M. Caon, Context-aware 3D gesture interaction based on multiple kinects, in Proceedings of the First International Conference on Ambient Computing, Applications, Services and Technologies, Barcelona, 2011, pp. 7–12
M. Boucher, Virtual dance and motion-capture. Contemp. Aesthet. 9, 10 (2011)
R. Aylward, J.A. Paradiso, Sensemble: a wireless, compact, multi-user sensor system for interactive dance, in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 06), Paris, Centre Pompidou, 2006, pp. 134–139
D. Drobny, M. Weiss, J. Borchers, Saltate!: a sensor-based system to support dance beginners, in CHI '09 Extended Abstracts on Human Factors in Computing Systems (ACM, New York, 2009), pp. 3943–3948
F. Bevilacqua, L. Naugle, C. Dobrian, Music control from 3D motion capture of dance, presented at the CHI 2001 Workshop on New Interfaces for Musical Expression (NIME), 2001
C. Dobrian, F. Bevilacqua, Gestural control of music: using the Vicon 8 motion capture system, in Proceedings of the Conference on New Interfaces for Musical Expression (NIME), National University of Singapore, 2003, pp. 161–163
M. Raptis, D. Kirovski, H. Hoppe, Real-time classification of dance gestures from skeleton animation, in Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, New York, 2011, pp. 147–156
D.S. Alexiadis, P. Kelly, P. Daras, N.E. O'Connor, T. Boubekeur, M.B. Moussa, Evaluating a dancer's performance using Kinect-based skeleton tracking, in Proceedings of the 19th ACM International Conference on Multimedia (ACM, New York, 2011), pp. 659–662
S. Essid, D.S. Alexiadis, R. Tournemenne, M. Gowing, P. Kelly, D.S. Monaghan et al., An advanced virtual dance performance evaluator, in Proceedings of the 37th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Kyoto, 2012, pp. 2269–2272
G. Alankus, A.A. Bayazit, O.B. Bayazit, Automated motion synthesis for dancing characters: motion capture and retrieval. Comput. Anim. Virtual Worlds 16(3–4), 259–271 (2005)
M. Brand, A. Hertzmann, Style machines, in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2000) (ACM Press, 2000), pp. 183–192
D. Bouchard, N. Badler, Semantic segmentation of motion capture using Laban movement analysis, in Proceedings of the 7th International Conference on Intelligent Virtual Agents (Springer, 2007), pp. 37–44
K. Kahol, P. Tripathi, S. Panchanathan, Automated gesture segmentation from dance sequences, in Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FGR '04), Seoul, 2004, pp. 883–888
J. James, T. Ingalls, G. Qian, L. Olsen, D. Whiteley, S. Wong et al., Movement-based interactive dance performance, in Proceedings of the 14th Annual ACM International Conference on Multimedia (ACM, New York, 2006), pp. 470–480
A.-M. Burns, M.M. Wanderley, Visual methods for the retrieval of guitarist fingering, in Proceedings of the Conference on New Interfaces for Musical Expression (IRCAM – Centre Pompidou, 2006), pp. 196–199
Vision par ordinateur pour la reconnaissance des gestes musicaux des doigts, Revue Francophone d'Informatique Musicale [Online] Available at: http://revues.mshparisnord.org/rfim/index.php?id=107. Accessed 13 July 2013
D. Grunberg, Gesture Recognition for Conducting Computer Music (n.d.) [Online] Available at: http://schubert.ece.drexel.edu/research/gestureRecognition. Accessed 10 Jan 2009
J. Verner, MIDI guitar synthesis yesterday, today and tomorrow, an overview of the whole fingerpicking thing. Record. Mag. 8(9), 52–57 (1995)
C. Traube, An interdisciplinary study of the timbre of the classical guitar, PhD Thesis, McGill University, 2004
Y. Takegawa, T. Terada, S. Nishio, Design and implementation of a real-time fingering detection system for piano performances, in Proceedings of the International Computer Music Conference, New Orleans, 2006, pp. 67–74
J. MacRitchie, B. Buck, N. Bailey, Visualising musical structure through performance gesture, in Proceedings of the International Society for Music Information Retrieval Conference, Kobe, 2009, pp. 237–242
M. Malempre, Pour une poignée de danses, Dapo Hainaut (ed.) (2010)
T. Calvert, W. Wilke, R. Ryman, I. Fox, Applications of computers to dance. IEEE Comput. Graph. Appl. 25(2), 6–12 (2005)
Y. Shen, X. Wu, C. Lua, H. Cheng, National Dances Protection Based on Motion Capture Technology, Chengdu, Sichuan, vol. 51 (IACSIT Press, Singapore, 2012), pp. 78–81
W.M. Brown, L. Cronk, K. Grochow, A. Jacobson, C.K. Liu, Z. Popovic et al., Dance reveals symmetry especially in young men. Nature 438(7071), 1148–1150 (2005)
D. Tardieu, X. Siebert, B. Mazzarino, R. Chessini, J. Dubois, S. Dupont, G. Varni, A. Visentin, Browsing a dance video collection: dance analysis and interface design. J. Multimodal User Interf. 4(1), 37–46 (2010)
J.C. Chan, H. Leung, J.K. Tang, T. Komura, A virtual reality dance training system using motion capture technology. IEEE Trans. Learn. Technol. 4(2), 187–195 (2011)
I. Cohen, A. Garg, T. Huang, Emotion recognition from facial expression using multilevel HMM, in Proceedings of the Neural Information Processing Systems Workshop on Affective Computing, Breckenridge, 2000
F. Bourel, C. Chibelushi, A. Low, Robust facial expression recognition using a state-based model of spatially-localized facial dynamics, in Proceedings of IEEE International Conference on Automatic Face and Gesture Recognition, Washington, 2002
B. Schuller, S. Reiter, R. Mueller, A. Hames, G. Rigoll, Speaker independent speech emotion recognition by ensemble classification, in Proceedings of the IEEE International Conference on Multimedia and Expo, Amsterdam, 2005, pp. 864–867
C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. Lee, A. Kazemzadeh, S. Lee, U. Neumann, S. Narayanan, Analysis of emotional recognition using facial expressions, speech and multimodal information, in Proceedings of the International Conference on Multimodal Interfaces (ACM, New York, 2004), pp. 205–211
R. Picard, E. Vyzas, J. Healey, Toward machine emotional intelligence: analysis of affective physiological state. IEEE Trans. Pattern Anal. Mach. Intell. 23(10), 1175–1191 (2001)
F. Nasoz, C. Lisetti, K. Alvarez, N. Finkelstein, Emotion recognition from physiological signals for user modeling of affect, in Proceedings of the International Conference on User Modeling, Johnstown, 2003
C. Lisetti, F. Nasoz, Using non-invasive wearable computers to recognize human emotions from physiological signals. EURASIP J. Appl. Signal Process. 11, 1672–1687 (2004)
D. McIntosh, A. Reichmann-Decker, P. Winkielman, J. Wilbarger, When the social mirror breaks: deficits in automatic, but not voluntary, mimicry of emotional facial expressions in autism. Dev. Sci. 9, 295–302 (2006)
F. Esposito, D. Malerba, G. Semeraro, O. Altamura, S. Ferilli, T. Basile, M. Berard, M. Ceci, Machine learning methods for automatically processing historical documents: from paper acquisition to XML transformation, in Proceedings of the First International Workshop on Document Image Analysis for Libraries (DIAL '04), Palo Alto, 2004, pp. 328–335
A. Mallik, S. Chaudhuri, H. Ghosh, Nrityakosha: preserving the intangible heritage of Indian classical dance. ACM J. Comput. Cult. Herit. 4(3), 11 (2011)
M. Makridis, P. Daras, Automatic classification of archaeological pottery sherds. J. Comput. Cult. Herit. 5(4), 15 (2012)
A. Karasik, A complete, automatic procedure for pottery documentation and analysis, in Proceedings of the IEEE Computer Vision and Pattern Recognition Workshops (CVPRW), San Francisco, 2010, pp. 29–34
S. Vrochidis, C. Doulaverakis, A. Gounaris, E. Nidelkou, L. Makris, I. Kompatsiaris, A hybrid ontology and visual-based retrieval model for cultural heritage multimedia collections. Int. J. Metadata Semant. Ontol. 3(3), 167–182 (2008)
M. Liggins, D.L. Hall, J. Llina, Handbook of Multisensor Data Fusion, Theory and Practice, 2nd edn. (CRC Press, Boca Raton, 2008)
O. Punska, Bayesian approach to multisensor data fusion, PhD. Dissertation, Department of Engineering, University of Cambridge, 1999
S. Nikolopoulos, C. Lakka, I. Kompatsiaris, C. Varytimidis, K. Rapantzikos, Y. Avrithis, Compound document analysis by fusing evidence across media, in Proceedings of the International Workshop on Content-Based Multimedia Indexing, Chania, 2009, pp. 175–180
S. Chang, D. Ellis, W. Jiang, K. Lee, A. Yanagawa, A.C. Loui, J. Luo, Large-scale multimodal semantic concept detection for consumer video, in Proceedings of the International Workshop on Multimedia Information Retrieval (MIR '07), September 2007, pp. 255–264
R. Huber-Mörk, S. Zambanini, M. Zaharieva, M. Kampel, Identification of ancient coins based on fusion of shape and local features. Mach. Vision Appl. 22(6), 983–994 (2011)
D. Datcu, L.J.M. Rothkrantz, Semantic Audio-Visual Data Fusion for Automatic Emotion Recognition (Euromedia, Porto, 2008)
M. Koolen, J. Kamps, Searching cultural heritage data: does structure help expert searchers?, in Proceedings of RIAO '10 Adaptivity, Personalization and Fusion of Heterogeneous Information, Paris, 2010, pp. 152–155
L. Bai, S. Lao, W. Zhang, G.J.F. Jones, A.F. Smeaton, A semantic content analysis framework for sports video based on ontology combined with MPEG-7, in Adaptive Multimedia Retrieval: Retrieval, User, and Semantics, Lecture Notes in Computer Science, July 2007, pp. 237–250
S. Dasiopoulou, V. Mezaris, I. Kompatsiaris, V.K. Papastathis, G.M. Strintzis, Knowledge-assisted semantic video object detection. IEEE Trans. Circuits Syst. Video Technol. 15(10), 1210–1224 (2005) (Special Issue on Analysis and Understanding for Video Adaptation)
J. Lien, T. Kanade, J. Cohn, C. Li, Automated facial expression recognition based on FACS action units, in Proceedings of the 3rd IEEE Conference on Automatic Face and Gesture Recognition, Nara, 1998, pp. 390–395
P. Mulholland, A. Wolff, T. Collins, Z. Zdrahal, An event-based approach to describing and understanding museum narratives, in Proceedings: Detection, Representation, and Exploitation of Events in the Semantic Web Workshop in Conjunction with the International Semantic Web Conference, Bonn, 2011
I. Kollia, V. Tzouvaras, N. Drosopoulos, G. Stamou, A systemic approach for effective semantic access to cultural content. Semant. Web – Interoperability, Usability, Applicability 3(1), 65–83 (2012)
A. Gaitatzes, D. Christopoulos, M. Roussou, Reviving the past: cultural heritage meets virtual reality, in Proceedings of the 2001 Conference on Virtual Reality, Archeology, and Cultural Heritage (ACM, November 2001), pp. 103–110
M. Ott, F. Pozzi, Towards a new era for cultural heritage education: discussing the role of ICT. Comput. Hum. Behav. 27(4), 1365–1371 (2011)
K.H. Veltman, Challenges for ICT/UCT applications in cultural heritage, in ICT and Heritage, ed. by C. Carreras (2005), online at http://www.uoc.edu/digithum/7/dt/eng/dossier.pdf
J.R. Savery, T.M. Duffy, Problem-based learning: an instructional model and its constructivist framework. Educ. Technol. 35, 31–38 (1995)
M. Mortara, C.E. Catalano, F. Bellotti, G. Fiucci, M. Houry-Panchetti, P. Petridis, Learning cultural heritage by serious games. J. Cult. Herit. 15(3), 318–325 (2014)
E.F. Anderson, L. McLoughlin, F. Liarokapis, C. Peters, P. Petridis, S. de Freitas, Serious games in cultural heritage, in Proceedings of the 10th International Symposium on Virtual Reality, Archaeology and Cultural Heritage VAST, ed. by M. Ashley, F. Liarokapis. State of the Art Reports (2009)
M. Ott, F. Pozzi, ICT and cultural heritage education: which added value? in Emerging Technologies and Information Systems for the Knowledge Society, ed. by Lytras et al., Lecture Notes in Computer Science, vol. 5288 (Springer, Berlin, 2008), pp. 131–138
X. Rodet, Y. Potard, J.-B. Barriere, The CHANT project: from the synthesis of the singing voice to synthesis in general. Comput. Music J. 8(3), 15–31 (1984)
G. Berndtsson, The KTH rule system for singing synthesis. Comput. Music J. 20(1), 76–91 (1996)
P. Cook, Physical models for music synthesis, and a meta-controller for real time performance, in Proceedings of the International Computer Music Conference and Festival, Delphi, 1992
P. Cook, Singing voice synthesis: history, current work, and future directions. Comput. Music J. 20(3), 38–46 (1996)
G. Bennett, X. Rodet, Synthesis of the singing voice, in Current Directions in Computer Music Research, ed. by M.V. Mathews, J.R. Pierce (MIT Press, Cambridge, 1989), pp. 19–44
H. Kenmochi, H. Ohshita, Vocaloid – commercial singing synthesizer based on sample concatenation, presented at Interspeech 2007, Antwerp, 2007, pp. 4009–4010
A. Kitsikidis, K. Dimitropoulos, S. Douka, N. Grammalidis, Dance analysis using multiple kinect sensors, in International Conference on Computer Vision Theory and Applications (VISAPP), IEEE, Vol. 2, 2014, January, pp. 789–795
Acknowledgements
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7-ICT-2011-9) under grant agreement no. FP7-ICT-600676 'i-Treasures: Intangible Treasures – Capturing the Intangible Cultural Heritage and Learning the Rare Know-How of Living Human Treasures'.
© 2017 Springer International Publishing AG
Cite this chapter
Alivizatou-Barakou, M. et al. (2017). Intangible Cultural Heritage and New Technologies: Challenges and Opportunities for Cultural Preservation and Development. In: Ioannides, M., Magnenat-Thalmann, N., Papagiannakis, G. (eds) Mixed Reality and Gamification for Cultural Heritage. Springer, Cham. https://doi.org/10.1007/978-3-319-49607-8_5
Print ISBN: 978-3-319-49606-1
Online ISBN: 978-3-319-49607-8