SwePub
Search the SwePub database


Result list for search "WFRF:(Anikin Andrey) srt2:(2021)"

Search: WFRF:(Anikin Andrey) > (2021)

  • Results 1-5 of 5
1.
  • Amorim, Maria, et al. (author)
  • Changes in Vocal Emotion Recognition Across the Life Span
  • 2021
  • In: Emotion. - : American Psychological Association (APA). - 1528-3542 .- 1931-1516. ; 21:2, pp. 315-325
  • Journal article (peer-reviewed) abstract
    • The ability to recognize emotions undergoes major developmental changes from infancy to adolescence, peaking in early adulthood, and declining with aging. A life span approach to emotion recognition is lacking in the auditory domain, and it remains unclear how the speaker’s and listener’s ages interact in the context of decoding vocal emotions. Here, we examined age-related differences in vocal emotion recognition from childhood until older adulthood and tested for a potential own-age bias in performance. A total of 164 participants (36 children [7–11 years], 53 adolescents [12–17 years], 48 young adults [20–30 years], 27 older adults [58–82 years]) completed a forced-choice emotion categorization task with nonverbal vocalizations expressing pleasure, relief, achievement, happiness, sadness, disgust, anger, fear, surprise, and neutrality. These vocalizations were produced by 16 speakers, 4 from each age group (children [8–11 years], adolescents [14–16 years], young adults [19–23 years], older adults [60–75 years]). Accuracy in vocal emotion recognition improved from childhood to early adulthood and declined in older adults. Moreover, patterns of improvement and decline differed by emotion category: faster development for pleasure, relief, sadness, and surprise and delayed decline for fear and surprise. Vocal emotions produced by older adults were more difficult to recognize when compared to all other age groups. No evidence for an own-age bias was found, except in children. These findings support effects of both speaker and listener ages on how vocal emotions are decoded and inform current models of vocal emotion perception.
2.
  • Anikin, Andrey, et al. (author)
  • Harsh is large : Nonlinear vocal phenomena lower voice pitch and exaggerate body size
  • 2021
  • In: Royal Society of London. Proceedings B. Biological Sciences. - : The Royal Society. - 1471-2954. ; 288:1954
  • Journal article (peer-reviewed) abstract
    • A lion’s roar, a dog’s bark, an angry yell in a pub brawl: what do these vocalizations have in common? They all sound harsh due to nonlinear vocal phenomena (NLP)—deviations from regular voice production, hypothesized to lower perceived voice pitch and thereby exaggerate the apparent body size of the vocalizer. To test this yet uncorroborated hypothesis, we synthesized human nonverbal vocalizations, such as roars, groans and screams, with and without NLP (amplitude modulation, subharmonics and chaos). We then measured their effects on nearly 700 listeners’ perceptions of three psychoacoustic (pitch, timbre, roughness) and three ecological (body size, formidability, aggression) characteristics. In an explicit rating task, all NLP lowered perceived voice pitch, increased voice darkness and roughness, and caused vocalizers to sound larger, more formidable and more aggressive. Key results were replicated in an implicit associations test, suggesting that the ‘harsh is large’ bias will arise in ecologically relevant confrontational contexts that involve a rapid, and largely implicit, evaluation of the opponent’s size. In sum, nonlinearities in human vocalizations can flexibly communicate both formidability and intention to attack, suggesting they are not a mere byproduct of loud vocalizing, but rather an informative acoustic signal well suited for intimidating potential opponents.
3.
  • Lima, Cesar, et al. (author)
  • Authentic and posed emotional vocalizations trigger distinct facial responses
  • 2021
  • In: Cortex. - : Elsevier BV. - 1973-8102 .- 0010-9452. ; 141, pp. 280-292
  • Journal article (peer-reviewed) abstract
    • The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains unclear whether and how similar mechanisms extend to audition. Here we examined facial electromyographic and electrodermal responses to nonverbal vocalizations that varied in emotional authenticity. Participants (N = 100) passively listened to laughs and cries that could reflect an authentic or a posed emotion. Bayesian mixed models indicated that listening to laughter evoked stronger facial responses than listening to crying. These responses were sensitive to emotional authenticity. Authentic laughs evoked more activity than posed laughs in the zygomaticus and orbicularis, muscles typically associated with positive affect. We also found that activity in the orbicularis and corrugator related to subjective evaluations in a subsequent authenticity perception task. Stronger responses in the orbicularis predicted higher perceived laughter authenticity. Stronger responses in the corrugator, a muscle associated with negative affect, predicted lower perceived laughter authenticity. Moreover, authentic laughs elicited stronger skin conductance responses than posed laughs. This arousal effect did not predict task performance, however. For crying, physiological responses were not associated with authenticity judgments. Altogether, these findings indicate that emotional authenticity affects peripheral nervous system responses to vocalizations. They also point to a role of sensorimotor mechanisms in the evaluation of authenticity in the auditory modality.
4.
  • Pinheiro, Ana, et al. (author)
  • Emotional authenticity modulates affective and social trait inferences from voices
  • 2021
  • In: Philosophical Transactions of the Royal Society B: Biological Sciences. - : The Royal Society. - 1471-2970 .- 0962-8436. ; 376:1840
  • Journal article (peer-reviewed) abstract
    • The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they can detect authenticity in other vocalizations, and whether authenticity determines the affective and social impressions that we form about others. Here, 137 participants listened to laughs and cries that could be spontaneous or volitional and rated them on authenticity, valence, arousal, trustworthiness and dominance. Bayesian mixed models indicated that listeners detect authenticity similarly well in laughter and crying. Speakers were also perceived to be more trustworthy, and in a higher arousal state, when their laughs and cries were spontaneous. Moreover, spontaneous laughs were evaluated as more positive than volitional ones, and we found that the same acoustic features predicted perceived authenticity and trustworthiness in laughter: high pitch, spectral variability and less voicing. For crying, associations between acoustic features and ratings were less reliable. These findings indicate that emotional authenticity shapes affective and social trait inferences from voices, and that the ability to detect authenticity in vocalizations is not limited to laughter. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.
5.
  • Pisanski, Katarzyna, et al. (author)
  • Did vocal size exaggeration contribute to the origins of vocalic complexity?
  • 2021
  • In: Philosophical Transactions of the Royal Society B: Biological Sciences. - : The Royal Society. - 1471-2970 .- 0962-8436. ; 377:1841
  • Journal article (peer-reviewed) abstract
    • Vocal tract elongation, which uniformly lowers vocal tract resonances (formant frequencies) in animal vocalizations, has evolved independently in several vertebrate groups as a means for vocalizers to exaggerate their apparent body size. Here, we propose that smaller speech-like articulatory movements that alter only individual formants can serve a similar yet less energetically costly size-exaggerating function. To test this, we examine whether uneven formant spacing alters the perceived body size of vocalizers in synthesized human vowels and animal calls. Among six synthetic vowel patterns, those characterized by the lowest first and second formant (the vowel /u/ as in ‘boot’) are consistently perceived as produced by the largest vocalizer. Crucially, lowering only one or two formants in animal-like calls also conveys the impression of a larger body size, and lowering the second and third formants simultaneously exaggerates perceived size to a similar extent as rescaling all formants. As the articulatory movements required for individual formant shifts are minor compared to full vocal tract extension, they represent a rapid and energetically efficient mechanism for acoustic size exaggeration. We suggest that, by favouring the evolution of uneven formant patterns in vocal communication, this deceptive strategy may have contributed to the origins of the phonemic diversification required for articulated speech. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.


 