Contributions of different modalities to the attribution of affective-epistemic states

Conference paper
Authors Jens Allwood
Stefano Lanzini
Elisabeth Ahlsén
Published in Proceedings from the 1st European Symposium on Multimodal Communication, University of Malta, Valletta, October 17-18, 2013, NEALT Proceedings Series, Linköping Electronic Conference Proceedings
Issue 101
Pages 1-6
ISBN 978-91-7519-266-6
ISSN 1650-3686
Publication year 2014
Published at Centre of Interdisciplinary Research/Cognition/Information. SSKKII (2010-)
Department of Applied Information Technology (GU)
Language en
Links www.ep.liu.se/ecp/101/001/ecp131010...
Subject categories Languages and Literature

Abstract

The focus of this study is the relation between multimodal and unimodal perception of emotions and attitudes. A point of departure for the study is the claim that multimodal presentation increases redundancy and often thereby also the correctness of interpretation. A study was carried out in order to investigate this claim by examining the relative role of unimodal versus multimodal visual and auditory perception for interpreting affective-epistemic states (AES). The abbreviation AES will be used both for the singular form “affective-epistemic state” and the plural form “affective-epistemic states”. Clips from video-recorded dyadic interactions were presented to 12 subjects using three types of presentation: Audio only, Video only and Audio+Video. The task was to interpret the affective-epistemic states of one of the two persons in the clip. The results indicated differences concerning the role of different sensory modalities for different affective-epistemic states. In some cases there was a “filtering” effect, rendering fewer interpretations in a multimodal presentation than in a unimodal one for a specific AES. This occurred for happiness, disinterest and understanding, whereas “mutual reinforcement”, rendering more interpretations for multimodal presentation than for unimodal video or audio presentation, occurred for nervousness, interest and thoughtfulness. Finally, for one AES, confidence, audio and video seemed to have mutually restrictive roles.
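To make the “filtering” versus “mutual reinforcement” distinction concrete, the sketch below tallies how often each AES is attributed under each presentation condition and compares the multimodal count with the unimodal ones. This is a minimal illustration, not the authors' analysis code; the example responses, condition labels and comparison rule are assumptions based only on the description in the abstract.

```python
# Illustrative sketch (assumed data and logic, not the study's actual analysis):
# count AES attributions per presentation condition and classify the modality effect.
from collections import Counter

# Hypothetical responses: (subject_id, condition, attributed AES)
responses = [
    (1, "audio", "happiness"), (1, "video", "happiness"), (1, "audio+video", "nervousness"),
    (2, "video", "happiness"), (2, "audio+video", "nervousness"), (2, "audio", "interest"),
    # ... further responses would follow in a real data set
]

counts = Counter((aes, cond) for _, cond, aes in responses)

def modality_effect(aes):
    """Compare multimodal with unimodal attribution counts for one AES."""
    audio = counts[(aes, "audio")]
    video = counts[(aes, "video")]
    both = counts[(aes, "audio+video")]
    if both < min(audio, video):
        return "filtering"        # fewer attributions when modalities are combined
    if both > max(audio, video):
        return "mutual reinforcement"  # more attributions when modalities are combined
    return "neither"

for aes in sorted({aes for aes, _ in counts}):
    print(aes, modality_effect(aes))
```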
