Contributions of different modalities to the attribution of affective-epistemic states
- Allwood, Jens, 1947 (author)
- Gothenburg University, Department of Applied Information Technology, Centre of Interdisciplinary Research/Cognition/Information (SSKKII, 2010-)
- Lanzini, Stefano (author)
- Gothenburg University, Department of Applied Information Technology, Centre of Interdisciplinary Research/Cognition/Information (SSKKII, 2010-)
- Ahlsén, Elisabeth, 1951 (author)
- Gothenburg University, Department of Applied Information Technology, Centre of Interdisciplinary Research/Cognition/Information (SSKKII, 2010-)
- 2014
- English.
In: Proceedings from the 1st European Symposium on Multimodal Communication, University of Malta, Valletta, October 17-18, 2013. NEALT Proceedings Series, Linköping Electronic Conference Proceedings No. 101, pp. 1-6. ISSN 1650-3686, e-ISSN 1650-3740. ISBN 9789175192666.
- Related links: https://gup.ub.gu.se...
Abstract
- The focus of this study is the relation between multimodal and unimodal perception of emotions and attitudes. A point of departure for the study is the claim that multimodal presentation increases redundancy and thereby often also the correctness of interpretation. A study was carried out to investigate this claim by examining the relative role of unimodal versus multimodal visual and auditory perception in interpreting affective-epistemic states (AES). The abbreviation AES is used for both the singular form "affective-epistemic state" and the plural form "affective-epistemic states". Clips from video-recorded dyadic interactions were presented to 12 subjects using three types of presentation: Audio only, Video only, and Audio+Video. The task was to interpret the affective-epistemic states of one of the two persons in the clip. The results indicated differences in the roles of the sensory modalities across different affective-epistemic states. In some cases there was a "filtering" effect, rendering fewer interpretations in a multimodal presentation than in a unimodal one for a specific AES; this occurred for happiness, disinterest and understanding. Conversely, "mutual reinforcement", rendering more interpretations for multimodal presentation than for unimodal video or audio presentation, occurred for nervousness, interest and thoughtfulness. Finally, for one AES, confidence, audio and video seem to have mutually restrictive roles.
Subject headings
- HUMANIORA -- Språk och litteratur (hsv//swe)
- HUMANITIES -- Languages and Literature (hsv//eng)
Publication and Content Type
- ref (peer-reviewed)
- kon (conference paper)