SwePub
Search the SwePub database


Result list for the search "WFRF:(Anikin Andrey)"

Search: WFRF:(Anikin Andrey)

  • Results 1-10 of 40
1.
  • Amorim, Maria, et al. (author)
  • Changes in Vocal Emotion Recognition Across the Life Span
  • 2021
  • In: Emotion. American Psychological Association (APA). ISSN 1528-3542, 1931-1516. 21:2, pp. 315-325
  • Journal article (peer-reviewed), abstract:
    • The ability to recognize emotions undergoes major developmental changes from infancy to adolescence, peaking in early adulthood, and declining with aging. A life span approach to emotion recognition is lacking in the auditory domain, and it remains unclear how the speaker’s and listener’s ages interact in the context of decoding vocal emotions. Here, we examined age-related differences in vocal emotion recognition from childhood until older adulthood and tested for a potential own-age bias in performance. A total of 164 participants (36 children [7–11 years], 53 adolescents [12–17 years], 48 young adults [20–30 years], 27 older adults [58–82 years]) completed a forced-choice emotion categorization task with nonverbal vocalizations expressing pleasure, relief, achievement, happiness, sadness, disgust, anger, fear, surprise, and neutrality. These vocalizations were produced by 16 speakers, 4 from each age group (children [8–11 years], adolescents [14–16 years], young adults [19–23 years], older adults [60–75 years]). Accuracy in vocal emotion recognition improved from childhood to early adulthood and declined in older adults. Moreover, patterns of improvement and decline differed by emotion category: faster development for pleasure, relief, sadness, and surprise and delayed decline for fear and surprise. Vocal emotions produced by older adults were more difficult to recognize when compared to all other age groups. No evidence for an own-age bias was found, except in children. These findings support effects of both speaker and listener ages on how vocal emotions are decoded and inform current models of vocal emotion perception.
2.
  • Anikin, Andrey (author)
  • A moan of pleasure should be breathy : The effect of voice quality on the meaning of human nonverbal vocalizations
  • 2020
  • In: Phonetica. Walter de Gruyter GmbH. ISSN 0031-8388, 1423-0321. 77:5, pp. 327-349
  • Journal article (peer-reviewed), abstract:
    • Prosodic features, such as intonation and voice intensity, have a well-documented role in communicating emotion, but less is known about the role of laryngeal voice quality in speech and particularly in nonverbal vocalizations such as laughs and moans. Potentially, however, variations in voice quality between tense and breathy may convey rich information about the speaker’s physiological and affective state. In this study breathiness was manipulated in synthetic human nonverbal vocalizations by adjusting the relative strength of upper harmonics and aspiration noise. In experiment 1 (28 prototypes × 3 manipulations = 84 sounds), otherwise identical vocalizations with tense versus breathy voice quality were associated with higher arousal (general alertness), higher dominance, and lower valence (unpleasant states). Ratings on discrete emotions in experiment 2 (56 × 3 = 168 sounds) confirmed that breathiness was reliably associated with positive emotions, particularly in ambiguous vocalizations (gasps and moans). The spectral centroid did not fully account for the effect of manipulation, confirming that the perceived change in voice quality was more specific than a general shift in timbral brightness. Breathiness is thus involved in communicating emotion with nonverbal vocalizations, possibly due to changes in low-level auditory salience and perceived vocal effort.
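To illustrate the manipulation described above, here is a minimal R sketch, assuming a simple additive model of voice quality; it is not the synthesis code used in the study, and the function name and parameter values are illustrative. Breathier versions get a steeper rolloff of the upper harmonics and a larger share of aspiration noise.

breathy_tone <- function(f0 = 220, dur = 0.5, sr = 16000,
                         breathiness = 0.5, n_harm = 20) {
  t <- seq(0, dur, by = 1 / sr)
  rolloff_db <- -6 - 12 * breathiness        # steeper spectral rolloff (dB/octave) when breathy
  harmonics <- sapply(1:n_harm, function(h)
    10 ^ (rolloff_db * log2(h) / 20) * sin(2 * pi * f0 * h * t))
  voiced <- rowSums(harmonics)               # harmonic (laryngeal) source
  noise <- rnorm(length(t))                  # broadband aspiration noise
  mix <- (1 - breathiness) * voiced / max(abs(voiced)) +
         breathiness * noise / max(abs(noise))
  mix / max(abs(mix))                        # normalized waveform, one value per sample
}

tense_version   <- breathy_tone(breathiness = 0.1)
breathy_version <- breathy_tone(breathiness = 0.7)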
3.
  • Anikin, Andrey, et al. (author)
  • A practical guide to calculating vocal tract length and scale-invariant formant patterns
  • 2023
  • In: Behavior Research Methods. ISSN 1554-3528.
  • Journal article (peer-reviewed), abstract:
    • Formants (vocal tract resonances) are increasingly analyzed not only by phoneticians in speech but also by behavioral scientists studying diverse phenomena such as acoustic size exaggeration and articulatory abilities of non-human animals. This often involves estimating vocal tract length acoustically and producing scale-invariant representations of formant patterns. We present a theoretical framework and practical tools for carrying out this work, including open-source software solutions included in R packages soundgen and phonTools. Automatic formant measurement with linear predictive coding is error-prone, but formant_app provides an integrated environment for formant annotation and correction with visual and auditory feedback. Once measured, formants can be normalized using a single recording (intrinsic methods) or multiple recordings from the same individual (extrinsic methods). Intrinsic speaker normalization can be as simple as taking formant ratios and calculating the geometric mean as a measure of overall scale. The regression method implemented in the function estimateVTL calculates the apparent vocal tract length assuming a single-tube model, while its residuals provide a scale-invariant vowel space based on how far each formant deviates from equal spacing (the schwa function). Extrinsic speaker normalization provides more accurate estimates of speaker- and vowel-specific scale factors by pooling information across recordings with simple averaging or mixed models, which we illustrate with example datasets and R code. The take-home messages are to record several calls or vowels per individual, measure at least three or four formants, check formant measurements manually, treat uncertain values as missing, and use the statistical tools best suited to each modeling context.
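As a concrete illustration of the single-tube regression method described above, here is a minimal R sketch; it is not the soundgen implementation (see estimateVTL and formant_app for the full workflow), and the function name is made up for this example. For a uniform tube closed at one end, the nth resonance is F_n = (2n - 1) * c / (4 * VTL), so regressing measured formants on the odd numbers 1, 3, 5, ... through the origin gives a slope of c / (4 * VTL), and the residuals describe how far each formant deviates from the equally spaced, schwa-like pattern.

estimate_vtl_regression <- function(formants_hz, speed_sound = 35400) {
  # speed_sound is in cm/s, so the estimated vocal tract length comes out in cm
  x <- 2 * seq_along(formants_hz) - 1      # odd multipliers of the quarter-wave resonance
  fit <- lm(formants_hz ~ x + 0)           # regression through the origin: slope = c / (4 * VTL)
  slope <- unname(coef(fit)[1])
  list(vtl_cm = speed_sound / (4 * slope), # apparent vocal tract length
       residuals_hz = residuals(fit))      # scale-invariant deviations from equal spacing
}

# Example with hypothetical formant measurements (Hz) of a schwa-like adult male vowel:
estimate_vtl_regression(c(500, 1500, 2500, 3500))$vtl_cm   # about 17.7 cm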
4.
  • Anikin, Andrey, et al. (author)
  • Beyond speech : exploring diversity in the human voice
  • 2023
  • In: iScience. ISSN 2589-0042. 26:11
  • Journal article (peer-reviewed), abstract:
    • Humans have evolved voluntary control over vocal production for speaking and singing, while preserving the phylogenetically older system of spontaneous nonverbal vocalizations such as laughs and screams. To test for systematic acoustic differences between these vocal domains, we analyzed a broad, cross-cultural corpus representing over 2 h of speech, singing, and nonverbal vocalizations. We show that, while speech is relatively low-pitched and tonal with mostly regular phonation, singing and especially nonverbal vocalizations vary enormously in pitch and often display harsh-sounding, irregular phonation owing to nonlinear phenomena. The evolution of complex supralaryngeal articulatory spectro-temporal modulation has been critical for speech, yet has not significantly constrained laryngeal source modulation. In contrast, articulation is very limited in nonverbal vocalizations, which predominantly contain minimally articulated open vowels and rapid temporal modulation in the roughness range. We infer that vocal source modulation works best for conveying affect, while vocal filter modulation mainly facilitates semantic communication.
5.
  • Anikin, Andrey, et al. (author)
  • Compensation for a large gesture-speech asynchrony in instructional videos
  • 2015
  • In: Gesture and Speech in Interaction - 4th edition (GESPIN 4), pp. 19-23
  • Conference paper (peer-reviewed), abstract:
    • We investigated the pragmatic effects of gesture-speech lag by asking participants to reconstruct formations of geometric shapes based on instructional films in four conditions: sync, video or audio lag (±1,500 ms), audio only. All three video groups rated the task as less difficult compared to the audio-only group and performed better. The scores were slightly lower when sound preceded gestures (video lag), but not when gestures preceded sound (audio lag). Participants thus compensated for delays of 1.5 seconds in either direction, apparently without making a conscious effort. This greatly exceeds the previously reported time window for automatic multimodal integration.
6.
  • Anikin, Andrey, et al. (author)
  • Do some languages sound more beautiful than others?
  • 2023
  • In: Proceedings of the National Academy of Sciences. ISSN 1091-6490. 120:17
  • Journal article (peer-reviewed), abstract:
    • Italian is sexy, German is rough—but how about Páez or Tamil? Are there universal phonesthetic judgments based purely on the sound of a language, or are preferences attributable to language-external factors such as familiarity and cultural stereotypes? We collected 2,125 recordings of 228 languages from 43 language families, including 5 to 11 speakers of each language to control for personal vocal attractiveness, and asked 820 native speakers of English, Chinese, or Semitic languages to indicate how much they liked these languages. We found a strong preference for languages perceived as familiar, even when they were misidentified, a variety of cultural-geographical biases, and a preference for breathy female voices. The scores by English, Chinese, and Semitic speakers were weakly correlated, indicating some cross-cultural concordance in phonesthetic judgments, but overall there was little consensus between raters about which languages sounded more beautiful, and average scores per language remained within ±2% after accounting for confounds related to familiarity and voice quality of individual speakers. None of the tested phonetic features—the presence of specific phonemic classes, the overall size of phonetic repertoire, its typicality and similarity to the listener’s first language—were robust predictors of pleasantness ratings, apart from a possible slight preference for nontonal languages. While population-level phonesthetic preferences may exist, their contribution to perceptual judgments of short speech recordings appears to be minor compared to purely personal preferences, the speaker’s voice quality, and perceived resemblance to other languages culturally branded as beautiful or ugly.
7.
  • Anikin, Andrey, et al. (author)
  • Harsh is large : Nonlinear vocal phenomena lower voice pitch and exaggerate body size
  • 2021
  • In: Royal Society of London. Proceedings B. Biological Sciences. The Royal Society. ISSN 1471-2954. 288:1954
  • Journal article (peer-reviewed), abstract:
    • A lion’s roar, a dog’s bark, an angry yell in a pub brawl: what do these vocalizations have in common? They all sound harsh due to nonlinear vocal phenomena (NLP)—deviations from regular voice production, hypothesized to lower perceived voice pitch and thereby exaggerate the apparent body size of the vocalizer. To test this yet uncorroborated hypothesis, we synthesized human nonverbal vocalizations, such as roars, groans and screams, with and without NLP (amplitude modulation, subharmonics and chaos). We then measured their effects on nearly 700 listeners’ perceptions of three psychoacoustic (pitch, timbre, roughness) and three ecological (body size, formidability, aggression) characteristics. In an explicit rating task, all NLP lowered perceived voice pitch, increased voice darkness and roughness, and caused vocalizers to sound larger, more formidable and more aggressive. Key results were replicated in an implicit associations test, suggesting that the ‘harsh is large’ bias will arise in ecologically relevant confrontational contexts that involve a rapid, and largely implicit, evaluation of the opponent’s size. In sum, nonlinearities in human vocalizations can flexibly communicate both formidability and intention to attack, suggesting they are not a mere byproduct of loud vocalizing, but rather an informative acoustic signal well suited for intimidating potential opponents.
8.
  • Anikin, Andrey, et al. (author)
  • Human Non-linguistic Vocal Repertoire : Call Types and Their Meaning
  • 2018
  • In: Journal of Nonverbal Behavior. Springer Science and Business Media LLC. ISSN 1573-3653, 0191-5886. 42:1, pp. 53-80
  • Journal article (peer-reviewed), abstract:
    • Recent research on human nonverbal vocalizations has led to considerable progress in our understanding of vocal communication of emotion. However, in contrast to studies of animal vocalizations, this research has focused mainly on the emotional interpretation of such signals. The repertoire of human nonverbal vocalizations as acoustic types, and the mapping between acoustic and emotional categories, thus remain underexplored. In a cross-linguistic naming task (Experiment 1), verbal categorization of 132 authentic (non-acted) human vocalizations by English-, Swedish- and Russian-speaking participants revealed the same major acoustic types: laugh, cry, scream, moan, and possibly roar and sigh. The association between call type and perceived emotion was systematic but non-redundant: listeners associated every call type with a limited, but in some cases relatively wide, range of emotions. The speed and consistency of naming the call type predicted the speed and consistency of inferring the caller's emotion, suggesting that acoustic and emotional categorizations are closely related. However, participants preferred to name the call type before naming the emotion. Furthermore, nonverbal categorization of the same stimuli in a triad classification task (Experiment 2) was more compatible with classification by call type than by emotion, indicating the former's greater perceptual salience. These results suggest that acoustic categorization may precede attribution of emotion, highlighting the need to distinguish between the overt form of nonverbal signals and their interpretation by the perceiver. Both within- and between-call acoustic variation can then be modeled explicitly, bringing research on human nonverbal vocalizations more in line with the work on animal communication.
9.
10.
  • Anikin, Andrey, et al. (author)
  • Implicit associations between individual properties of color and sound
  • 2019
  • In: Attention, Perception & Psychophysics. Springer Science and Business Media LLC. ISSN 1943-3921, 1943-393X. 81:3, pp. 764-777
  • Journal article (peer-reviewed), abstract:
    • We report a series of 22 experiments in which the implicit associations test (IAT) was used to investigate cross-modal correspondences between visual (luminance, hue [R-G, B-Y], saturation) and acoustic (loudness, pitch, formants [F1, F2], spectral centroid, trill) dimensions. Colors were sampled from the perceptually accurate CIE-Lab space, and the complex, vowel-like sounds were created with a formant synthesizer capable of separately manipulating individual acoustic properties. In line with previous reports, the loudness and pitch of acoustic stimuli were associated with both luminance and saturation of the presented colors. However, pitch was associated specifically with color lightness, whereas loudness mapped onto greater visual saliency. Manipulating the spectrum of sounds without modifying their pitch showed that an upward shift of spectral energy was associated with the same visual features (higher luminance and saturation) as higher pitch. In contrast, changing formant frequencies of synthetic vowels while minimizing the accompanying shifts in spectral centroid failed to reveal cross-modal correspondences with color. This may indicate that the commonly reported associations between vowels and colors are mediated by differences in the overall balance of low- and high-frequency energy in the spectrum rather than by vowel identity as such. Surprisingly, the hue of colors with the same luminance and saturation was not associated with any of the tested acoustic features, except for a weak preference to match higher pitch with blue (vs. yellow). We discuss these findings in the context of previous research and consider their implications for sound symbolism in world languages.
Type of publication
journal article (34)
other publication (3)
conference paper (2)
doctoral thesis (1)
Type of content
peer-reviewed (36)
other academic/artistic (4)
Author/editor
Persson, Tomas (3)
Gärdenfors, Peter (1)
Wallin, Annika (1)
Kelly, Daniel (1)
Bengtsson-Palme, Joh ... (1)
Nilsson, Henrik (1)
Kelly, Ryan (1)
Li, Ying (1)
Moore, Matthew D. (1)
Liu, Fang (1)
Zhang, Yao (1)
Jin, Yi (1)
Raza, Ali (1)
Rafiq, Muhammad (1)
Zhang, Kai (1)
Khatlani, T (1)
Kahan, Thomas (1)
Sörelius, Karl, 1981 ... (1)
Batra, Jyotsna (1)
Roobol, Monique J (1)
Backman, Lars (1)
Yan, Hong (1)
Schmidt, Axel (1)
Lorkowski, Stefan (1)
Thrift, Amanda G. (1)
Zhang, Wei (1)
Hammerschmidt, Sven (1)
Patil, Chandrashekha ... (1)
Wang, Jun (1)
Pollesello, Piero (1)
Conesa, Ana (1)
El-Esawi, Mohamed A. (1)
Zhang, Weijia (1)
Li, Jian (1)
Marinello, Francesco (1)
Frilander, Mikko J. (1)
Wei, Pan (1)
Badie, Christophe (1)
Zhao, Jing (1)
Li, You (1)
Bansal, Abhisheka (1)
Rahman, Proton (1)
Parchi, Piero (1)
Polz, Martin (1)
Andreasson, Rebecca (1)
Ijzerman, Adriaan P. (1)
Subhash, Santhilal, ... (1)
Quinn, Terence J. (1)
Uversky, Vladimir N. (1)
Gemmill, Alison (1)
University
Lunds universitet (40)
Göteborgs universitet (1)
Uppsala universitet (1)
Högskolan i Halmstad (1)
Stockholms universitet (1)
Chalmers tekniska högskola (1)
Karolinska Institutet (1)
Language
English (40)
Research subject (UKÄ/SCB)
Social Sciences (18)
Humanities (16)
Natural Sciences (14)
Medical and Health Sciences (5)

Year
