SwePub


Results list for the search "WFRF:(Anikin Andrey) srt2:(2018)"

Search: WFRF:(Anikin Andrey) > (2018)

  • Results 1-4 of 4
1.
  • Anikin, Andrey, et al. (author)
  • Human Non-linguistic Vocal Repertoire: Call Types and Their Meaning
  • 2018
  • In: Journal of Nonverbal Behavior. Springer Science and Business Media LLC. ISSN 0191-5886, e-ISSN 1573-3653. Vol. 42, no. 1, pp. 53-80.
  • Journal article (peer-reviewed). Abstract:
    • Recent research on human nonverbal vocalizations has led to considerable progress in our understanding of vocal communication of emotion. However, in contrast to studies of animal vocalizations, this research has focused mainly on the emotional interpretation of such signals. The repertoire of human nonverbal vocalizations as acoustic types, and the mapping between acoustic and emotional categories, thus remain underexplored. In a cross-linguistic naming task (Experiment 1), verbal categorization of 132 authentic (non-acted) human vocalizations by English-, Swedish- and Russian-speaking participants revealed the same major acoustic types: laugh, cry, scream, moan, and possibly roar and sigh. The association between call type and perceived emotion was systematic but non-redundant: listeners associated every call type with a limited, but in some cases relatively wide, range of emotions. The speed and consistency of naming the call type predicted the speed and consistency of inferring the caller's emotion, suggesting that acoustic and emotional categorizations are closely related. However, participants preferred to name the call type before naming the emotion. Furthermore, nonverbal categorization of the same stimuli in a triad classification task (Experiment 2) was more compatible with classification by call type than by emotion, indicating the former's greater perceptual salience. These results suggest that acoustic categorization may precede attribution of emotion, highlighting the need to distinguish between the overt form of nonverbal signals and their interpretation by the perceiver. Both within- and between-call acoustic variation can then be modeled explicitly, bringing research on human nonverbal vocalizations more in line with the work on animal communication.
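The abstract above ties the speed and consistency of call-type naming to the consistency of emotion attribution. As an illustration only (hypothetical responses; the paper's actual analysis pipeline is not described here), per-stimulus naming consistency is often summarized as the share of listeners who chose the modal label:

```python
from collections import Counter

# Hypothetical responses: stimulus id -> labels chosen by listeners.
# Real data would come from the cross-linguistic naming task.
responses = {
    "stim_01": ["laugh", "laugh", "laugh", "cry"],
    "stim_02": ["scream", "roar", "scream", "scream"],
    "stim_03": ["moan", "sigh", "moan", "moan"],
}

def naming_consistency(labels):
    """Proportion of listeners who chose the most common (modal) label."""
    counts = Counter(labels)
    modal_label, modal_count = counts.most_common(1)[0]
    return modal_label, modal_count / len(labels)

for stim, labels in responses.items():
    label, consistency = naming_consistency(labels)
    print(f"{stim}: modal label = {label!r}, consistency = {consistency:.2f}")
```
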
2.
  • Anikin, Andrey, et al. (author)
  • Perceptual and acoustic differences between authentic and acted nonverbal emotional vocalizations
  • 2018
  • In: Quarterly Journal of Experimental Psychology. SAGE Publications. ISSN 1747-0218, e-ISSN 1747-0226. Vol. 71, no. 3, pp. 622-641.
  • Journal article (peer-reviewed). Abstract:
    • Most research on nonverbal emotional vocalizations is based on actor portrayals, but how similar are they to the vocalizations produced spontaneously in everyday life? Perceptual and acoustic differences have been discovered between spontaneous and volitional laughs, but little is known about other emotions. We compared 362 acted vocalizations from seven corpora with 427 authentic vocalizations using acoustic analysis, and 278 vocalizations (139 authentic and 139 acted) were also tested in a forced-choice authenticity detection task (N = 154 listeners). Target emotions were: achievement, amusement, anger, disgust, fear, pain, pleasure, and sadness. Listeners distinguished between authentic and acted vocalizations with accuracy levels above chance across all emotions (overall accuracy 65%). Accuracy was highest for vocalizations of achievement, anger, fear, and pleasure, which also displayed the largest differences in acoustic characteristics. In contrast, both perceptual and acoustic differences between authentic and acted vocalizations of amusement, disgust, and sadness were relatively small. Acoustic predictors of authenticity included higher and more variable pitch, lower harmonicity, and less regular temporal structure. The existence of perceptual and acoustic differences between authentic and acted vocalizations for all analysed emotions suggests that it may be useful to include spontaneous expressions in datasets for psychological research and affective computing.
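The acoustic predictors reported above (higher and more variable pitch, lower harmonicity, less regular temporal structure) map onto standard signal-processing features. A minimal sketch, assuming librosa and scipy are available and using spectral flatness as a crude inverse proxy for harmonicity; the study's actual feature set and tooling are not specified here, and treating the 278 stimuli as independent trials in the binomial check below is a simplification:

```python
import librosa
import numpy as np
from scipy.stats import binomtest

def acoustic_summary(path):
    """Rough per-file stand-ins for the paper's acoustic predictors."""
    y, sr = librosa.load(path, sr=None)            # keep native sample rate
    f0, voiced, _ = librosa.pyin(y, fmin=60.0, fmax=1200.0, sr=sr)
    f0 = f0[~np.isnan(f0)]                         # keep voiced frames only
    flatness = librosa.feature.spectral_flatness(y=y)
    return {
        "pitch_mean_hz": float(np.mean(f0)) if f0.size else np.nan,
        "pitch_sd_hz": float(np.std(f0)) if f0.size else np.nan,
        # Higher flatness ~ noisier signal, i.e. lower harmonicity (proxy).
        "spectral_flatness": float(np.mean(flatness)),
    }

# Usage: features = acoustic_summary("some_vocalization.wav")

# Above-chance check for the forced-choice task: 65% correct vs. 50% chance,
# simplifying by treating the 278 stimuli as independent trials.
n_trials = 278
n_correct = round(0.65 * n_trials)
test = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"p = {test.pvalue:.2e}")
```
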
3.
  • Anikin, Andrey, et al. (author)
  • Synesthetic Associations Between Voice and Gestures in Preverbal Infants: Weak Effects and Methodological Concerns
  • 2018
  • Other publication (other academic/artistic). Abstract:
    • Adult humans spontaneously associate visual features, such as size and direction of movement, with phonetic properties like vowel quality and auditory pitch. A number of recent studies have claimed that looking time in preverbal infants reveals the same associations, which would indicate that some cross-modal correspondences are the result of perceptual biases. Here we tested 30 infants of age 7-13 months, who were exposed to pairs of audiovisual stimuli presented first sequentially and then side by side. The stimuli consisted of a visual object (computer-animated ball or filmed human hand) moving sinusoidally, vertically, or in a U-shape and accompanied by a sliding voice-like tone. Sequential presentation revealed no preference for either audiovisual synchrony or synesthetic congruency, while side-by-side presentation revealed a small preference for incongruent stimuli. The effect of congruency was similar for the animated ball and filmed human hand. These findings extend the results of previous research on pitch-motion synesthesia in preverbal infants, which used animations and sliding whistles, to more ecologically relevant stimuli such as voice and gestures. If infants and adults share the same preferences for non-arbitrary mappings between manual gestures and intonation, this could indicate that cross-modal correspondences facilitate language acquisition. On the other hand, a critical survey of the field revealed that previous studies of audiovisual cross-modal correspondences in infants suffer from replication failures due to poor robustness of the reported effect with respect to experimental stimuli and testing procedure. We therefore argue that the research on cross-modal correspondences in infants would profit from using alternative testing methods in addition to preferential looking and call for replication of previously reported congruency effects.
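For context on the preferential-looking measure above: such results are commonly reduced to a per-infant preference score and tested with a paired comparison. A purely illustrative sketch with placeholder numbers (not the study's data):

```python
import numpy as np
from scipy.stats import ttest_rel

# Placeholder looking times (seconds) for a few hypothetical infants;
# the study itself tested 30 infants aged 7-13 months.
congruent = np.array([4.1, 3.8, 5.0, 4.4, 3.9])
incongruent = np.array([4.6, 4.0, 5.3, 4.9, 4.2])

# Preference score: share of total looking time spent on incongruent stimuli.
preference = incongruent / (congruent + incongruent)
print("mean preference for incongruent:", preference.mean().round(3))

# Paired test of raw looking times (one common choice of analysis).
t, p = ttest_rel(incongruent, congruent)
print(f"t = {t:.2f}, p = {p:.3f}")
```
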
4.
  • Oliva, Manuel, et al. (author)
  • Pupil dilation reflects the time course of emotion recognition in human vocalizations
  • 2018
  • In: Scientific Reports. Springer Science and Business Media LLC. ISSN 2045-2322. Vol. 8, no. 1.
  • Journal article (peer-reviewed). Abstract:
    • The processing of emotional signals usually causes an increase in pupil size, and this effect has been largely attributed to autonomic arousal prompted by the stimuli. Additionally, changes in pupil size were associated with decision making during non-emotional perceptual tasks. Therefore, in this study we investigated the relationship between pupil size fluctuations and the process of emotion recognition. Participants heard human nonverbal vocalizations (e.g., laughing, crying) and indicated the emotional state of the speakers as soon as they had identified it. The results showed that during emotion recognition, the time course of pupil response was driven by the decision-making process. In particular, peak pupil dilation betrayed the time of emotional selection. In addition, pupil response revealed properties of the decisions, such as the perceived emotional valence and the confidence in the assessment. Because pupil dilation (under isoluminance conditions) is almost exclusively promoted by norepinephrine (NE) release from the locus coeruleus (LC), the results suggest an important role of the LC-NE system during emotion processing.
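The link reported above between peak pupil dilation and the moment of emotional selection presupposes a routine preprocessing step: baseline-correct each trial's pupil trace, then locate the dilation peak and its latency. A minimal sketch on a synthetic trace (the paper's exact pipeline is not given here):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 60.0                         # sampling rate (Hz), typical for eye trackers
t = np.arange(0, 5, 1 / fs)       # one 5 s trial

# Synthetic pupil trace (mm): baseline plus a dilation peaking ~2.5 s in.
rng = np.random.default_rng(0)
trace = 3.0 + 0.4 * np.exp(-((t - 2.5) ** 2) / 0.5) + rng.normal(0, 0.01, t.size)

# Baseline-correct against the first 500 ms (pre-response period here).
baseline = trace[t < 0.5].mean()
dilation = trace - baseline

# Peak dilation and its latency, the quantity linked above to emotion selection.
peaks, props = find_peaks(dilation, height=0.1)
peak_idx = peaks[np.argmax(props["peak_heights"])]
print(f"peak dilation {dilation[peak_idx]:.2f} mm at t = {t[peak_idx]:.2f} s")
```
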


 