SwePub
Search the SwePub database


Result list for search "WFRF:(Eyben F.)"

  • Result 1-3 of 3
1.
  • Eyben, F., et al. (author)
  • Socially Aware Many-to-Machine Communication
  • 2012
  • Conference paper (peer-reviewed)
    • Abstract: This report describes the output of project P5: Socially Aware Many-to-Machine Communication (M2M) at the eNTERFACE’12 workshop. In this project, we designed and implemented a new front-end for handling multi-user interaction in a dialog system. We exploit the Microsoft Kinect device to capture multimodal input and extract features describing user and face positions. These data are then analyzed in real time to robustly detect speech and determine both who is speaking and whether the speech is directed at the system. This new front-end is integrated into the SEMAINE (Sustained Emotionally colored Machine-human Interaction using Nonverbal Expression) system. Furthermore, a multimodal corpus has been created, capturing all of the system inputs in two different scenarios involving human-human and human-computer interaction.
2.
  • Scherer, K. R., et al. (author)
  • The expression of emotion in the singing voice : Acoustic patterns in vocal performance
  • 2017
  • In: Journal of the Acoustical Society of America. Acoustical Society of America (ASA). ISSN 0001-4966, e-ISSN 1520-8524. 142(4), pp. 1805-1815
  • Journal article (peer-reviewed)
    • Abstract: There has been little research on the acoustic correlates of emotional expression in the singing voice. In this study, two pertinent questions are addressed: How does a singer's emotional interpretation of a musical piece affect acoustic parameters in the sung vocalizations? Are these patterns specific enough to allow statistical discrimination of the intended expressive targets? Eight professional opera singers were asked to sing the musical scale upwards and downwards (using meaningless content) to express different emotions, as if on stage. The studio recordings were acoustically analyzed with a standard set of parameters. The results show robust vocal signatures for the emotions studied. Overall, there is a major contrast between sadness and tenderness on the one hand, and anger, joy, and pride on the other, based on low vs. high levels on the components of loudness, vocal dynamics, perturbation variation, and low-frequency energy. This pattern can be explained by the high power and arousal characteristics of the emotions with high levels on these components. A multiple discriminant analysis yields classification accuracy greatly exceeding chance level, confirming the reliability of the acoustic patterns.
3.
  • Vögel, H.-J., et al. (author)
  • Emotion-awareness for intelligent vehicle assistants : A research agenda
  • 2018
  • In: Proceedings - International Conference on Software Engineering. New York, NY, USA: IEEE Computer Society. ISBN 9781450357395. pp. 11-15
  • Conference paper (peer-reviewed)
    • Abstract: EVA describes a new class of emotion-aware autonomous systems delivering intelligent personal assistant functionalities. EVA requires a multi-disciplinary approach, combining a number of critical building blocks into a cybernetic systems/software architecture: emotion-aware systems and algorithms, multimodal interaction design, cognitive modelling, decision making and recommender systems, emotion sensing as feedback for learning, and distributed (edge) computing delivering cognitive services.