Abstract
The perceived quality of synthetic speech depends strongly on its prosodic naturalness. Starting from earlier work by Mixdorff on a linguistically motivated model of German intonation based on the Fujisaki model, an integrated approach to predicting F0 along with syllable duration and energy was developed. The current paper first presents statistical results concerning the relationship between the linguistic and phonetic information underlying an utterance and its prosodic features. These results were employed for training the integrated prosodic model, a multi-layer feed-forward network (MFN) that predicts syllable duration and energy along with syllable-aligned Fujisaki control parameters. The paper then focuses on the perceptual evaluation method developed, which compares resynthesis stimuli created by controlled prosodic degradation of natural speech with stimuli created using the integrated model. The results indicate that stimuli produced by the integrated model generally receive better ratings than degraded stimuli with comparable durational and F0 deviations from the original. An important outcome is the observation that the accuracy of the predicted syllable durations appears to be a stronger factor in perceived quality than the accuracy of the predicted F0 contour.
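For illustration: in the Fujisaki model cited above (Fujisaki and Hirose, 1984), the F0 contour is generated in the log-frequency domain as the superposition of a base frequency, phrase components, and accent components. The following minimal Python sketch shows that superposition only; the time constants alpha and beta and the ceiling gamma are conventional defaults from the Fujisaki literature rather than values from this paper, and the command values in the usage example are hypothetical.

import numpy as np

def fujisaki_f0(t, fb, phrase_cmds, accent_cmds, alpha=2.0, beta=20.0, gamma=0.9):
    """Superpose ln F0(t) from a base frequency fb, phrase commands, and
    accent commands (standard Fujisaki formulation; defaults are the usual
    textbook values, not parameters reported in this study)."""
    ln_f0 = np.full_like(t, np.log(fb))

    # Phrase component: response to an impulse command of magnitude Ap at T0,
    # Gp(tau) = alpha^2 * tau * exp(-alpha * tau) for tau >= 0, else 0.
    for t0, ap in phrase_cmds:
        tau = np.maximum(t - t0, 0.0)
        ln_f0 += ap * alpha**2 * tau * np.exp(-alpha * tau)

    # Accent component: response to a pedestal command of amplitude Aa
    # between onset T1 and offset T2,
    # Ga(tau) = min(1 - (1 + beta*tau) * exp(-beta*tau), gamma) for tau >= 0.
    def ga(tau):
        tau = np.maximum(tau, 0.0)
        return np.minimum(1.0 - (1.0 + beta * tau) * np.exp(-beta * tau), gamma)

    for t1, t2, aa in accent_cmds:
        ln_f0 += aa * (ga(t - t1) - ga(t - t2))

    return np.exp(ln_f0)  # F0 contour in Hz

# Hypothetical usage: one phrase command at utterance onset and one accent
# command spanning a stressed syllable, over a 2-second utterance.
t = np.linspace(0.0, 2.0, 400)
f0 = fujisaki_f0(t, fb=80.0,
                 phrase_cmds=[(0.0, 0.5)],
                 accent_cmds=[(0.4, 0.7, 0.4)])

Note that, per the abstract, the integrated model does not predict the F0 contour directly; it predicts the syllable-aligned command parameters (the inputs to a generator like the one above) together with syllable duration and energy.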
References
Corrigan, G., Massey, N., and Karaali, O. (1997). Generating segment durations in a text-to-speech system: A hybrid rule-based/neural network approach. Proceedings of Eurospeech '97. Rhodes, vol. 5, pp. 2675-2678.
Fujisaki, H. and Hirose, K. (1984). Analysis of voice fundamental frequency contours for declarative sentences of Japanese. Journal of the Acoustical Society of Japan (E), 5(4):233-241.
Hecht-Nielsen, R. (1990). Neurocomputing. Reading (Mass.): Addison-Wesley.
Hoffmann, R. (1999). A multilingual text-to-speech system. The Phonetician, 80:5-10.
Jokisch, O., Hirschfeld, D., Eichner, M., and Hoffmann, R. (1998). Multi-level rhythm control for speech synthesis using hybrid data driven and rule-based approaches. Proceedings of ICSLP'98. Sydney, pp. 607-610.
Jokisch, O., Mixdorff, H. et al. (2000). Learning the parameters of quantitative prosody models. Proceedings ICSLP 2000. Beijing, China, vol. 1, pp. 645-648.
Mixdorff, H. (1998). Intonation Patterns of German: Model-based Quantitative Analysis and Synthesis of F0-Contours. PhD thesis, TU Dresden (http://www.tfh-berlin.de/~mixdorff/thesis.htm).
Mixdorff, H. (2000). A novel approach to the fully automatic extraction of Fujisaki model parameters. Proceedings ICASSP 2000. Istanbul, Turkey, vol. 3, pp. 1281-1284.
Mixdorff, H. and Jokisch, O. (2001a). Building an integrated prosodic model of German. Proceedings of Eurospeech 2001. Aalborg, Denmark, vol. 2, pp. 947-950.
Mixdorff, H. and Jokisch, O. (2001b). Comparing a data-driven and a rule-based approach to predicting prosodic features of German. Tagungsband der 12. Konferenz Elektronische Sprachsignalverarbeitung. Bonn, Germany, pp. 298-305.
Mixdorff, H. and Mehnert, D. (1999). Exploring the naturalness of several German high-quality-text-to-speech systems. Proceedings of Eurospeech '99. Budapest, Hungary, vol. 4, pp. 1859-1862.
Rapp, S. (1998). Automatisierte Erstellung von Korpora für die Prosodieforschung [Automated creation of corpora for prosody research]. PhD thesis, Universität Stuttgart, Institut für Maschinelle Sprachverarbeitung.
Sonntag, G.P. and Portele, T. (1998). PURR: A method for prosody evaluation and investigation. Journal of Computer Speech and Language, 12(4):437-451. Special issue on evaluation in language and speech technology.
Stöber, K., Portele, T., Wagner, P., and Hess, W. (1999). Synthesis by word concatenation. Proceedings of EUROSPEECH '99. Budapest, vol. 2, pp. 619-622.
Zell, A. (1994). Simulation Neuronaler Netze [Simulation of neural networks]. Bonn: Addison-Wesley.