Open Access Research Article

Visual Contribution to Speech Perception: Measuring the Intelligibility of Animated Talking Heads

Slim Ouni1*, Michael M Cohen2, Hope Ishak2 and Dominic W Massaro2

Author Affiliations

1 LORIA, Campus Scientifique, BP 239, Vandœuvre-lès-Nancy Cedex 54506, France

2 Perceptual Science Laboratory, University of California, Santa Cruz, CA 95064, USA


EURASIP Journal on Audio, Speech, and Music Processing 2007, 2007:047891  doi:10.1155/2007/47891

Published: 23 October 2006

Abstract

Animated agents are becoming increasingly common in research and applications in speech science. An important challenge is to evaluate the effectiveness of such an agent in terms of the intelligibility of its visible speech. In three experiments, we extend and test the Sumby and Pollack (1954) metric to allow the comparison of an agent relative to a standard or reference, and also propose a new metric based on the fuzzy logical model of perception (FLMP) to describe the benefit provided by a synthetic animated face relative to the benefit provided by a natural face. A valid metric would allow direct comparisons across different experiments and would give measures of the benefit of a synthetic animated face relative to a natural face (or indeed any two conditions), and of how this benefit varies as a function of the type of synthetic face, the test items (e.g., syllables versus sentences), different individuals, and applications.
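As a point of reference for the metric cited above (the extension developed in the paper may differ in its details), the Sumby and Pollack (1954) measure of visual contribution to speech intelligibility is commonly written as

R = \frac{C_{AV} - C_{A}}{1 - C_{A}},

where C_{A} is the proportion of items identified correctly in the auditory-alone condition and C_{AV} is the proportion correct in the bimodal (auditory-visual) condition. R thus expresses the visual benefit as a fraction of the maximum possible improvement over auditory-alone performance, which makes scores comparable across noise levels at which C_{A} differs.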