SwePub
Search the SwePub database



Search: WFRF:(Gilmartin E.)

  • Result 1-3 of 3
1.
  • Eyben, F., et al. (author)
  • Socially Aware Many-to-Machine Communication
  • 2012
  • Conference paper (peer-reviewed), abstract:
    • This report describes the output of the project P5: Socially Aware Many-to-Machine Communication (M2M) at the eNTERFACE’12 workshop. In this project, we designed and implemented a new front-end for handling multi-user interaction in a dialog system. We exploit the Microsoft Kinect device for capturing multimodal input and extract some features describing user and face positions. These data are then analyzed in real-time to robustly detect speech and determine both who is speaking and whether the speech is directed to the system or not. This new front-end is integrated into the SEMAINE (Sustained Emotionally colored Machine-human Interaction using Nonverbal Expression) system. Furthermore, a multimodal corpus has been created, capturing all of the system inputs in two different scenarios involving human-human and human-computer interaction.
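The front-end described in the abstract above decides, from Kinect-derived features, who is speaking and whether the speech is addressed to the system. The paper's code is not part of this record, so the following is only a minimal Python sketch of that idea; the feature names, thresholds, and the rule-based fusion are assumptions for illustration, not the project's actual method.

```python
# Illustrative sketch only: feature schema, thresholds, and fusion rule are assumptions.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class UserObservation:
    """Per-user features extracted from one Kinect frame (hypothetical schema)."""
    user_id: int
    is_speaking: bool      # voice activity attributed to this user
    face_yaw_deg: float    # head rotation relative to the sensor
    distance_m: float      # distance from the sensor


def find_addressee(users: List[UserObservation],
                   max_yaw_deg: float = 25.0,
                   max_distance_m: float = 2.5) -> Optional[int]:
    """Return the id of a user who is speaking *to the system*, or None.

    Rule of thumb: the speaker must be facing the sensor (small yaw) and be
    reasonably close; otherwise the speech is treated as human-human talk.
    """
    candidates = [
        u for u in users
        if u.is_speaking
        and abs(u.face_yaw_deg) <= max_yaw_deg
        and u.distance_m <= max_distance_m
    ]
    if not candidates:
        return None
    # If several users speak at once, prefer the one facing the sensor most directly.
    return min(candidates, key=lambda u: abs(u.face_yaw_deg)).user_id


if __name__ == "__main__":
    frame = [
        UserObservation(user_id=1, is_speaking=True,  face_yaw_deg=10.0, distance_m=1.8),
        UserObservation(user_id=2, is_speaking=True,  face_yaw_deg=70.0, distance_m=1.5),
        UserObservation(user_id=3, is_speaking=False, face_yaw_deg=0.0,  distance_m=2.0),
    ]
    print(find_addressee(frame))  # -> 1 (speaking and facing the system)
```

A rule like this only illustrates how face orientation can separate system-directed speech from side talk; the actual project fuses richer audio-visual cues in real time.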
2.
  • Csapo, A., et al. (author)
  • Multimodal conversational interaction with a humanoid robot
  • 2012
  • In: 3rd IEEE International Conference on Cognitive Infocommunications, CogInfoCom 2012 - Proceedings. IEEE. ISBN 9781467351874, pp. 667-672
  • Conference paper (peer-reviewed), abstract:
    • The paper presents a multimodal conversational interaction system for the Nao humanoid robot. The system was developed at the 8th International Summer Workshop on Multimodal Interfaces, Metz, 2012. We implemented WikiTalk, an existing spoken dialogue system for open-domain conversations, on Nao. This greatly extended the robot's interaction capabilities by enabling Nao to talk about an unlimited range of topics. In addition to speech interaction, we developed a wide range of multimodal interactive behaviours by the robot, including face-tracking, nodding, communicative gesturing, proximity detection and tactile interrupts. We made video recordings of user interactions and used questionnaires to evaluate the system. We further extended the robot's capabilities by linking Nao with Kinect.
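WikiTalk, as described in the abstract above, drives open-domain conversation from the link structure of Wikipedia: the robot presents an article and the links in it become the topics the user can switch to. The sketch below illustrates that idea with the public MediaWiki web API; it is a hedged stand-in, not WikiTalk's own code, and the helper names (topic_intro, follow_up_topics) are invented for this example.

```python
# Illustrative sketch only: shows topic choice driven by Wikipedia's link
# structure via the public MediaWiki API, not WikiTalk's implementation.

import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"


def _query(params: dict) -> dict:
    """Send one MediaWiki API query and return the parsed JSON response."""
    url = API + "?" + urllib.parse.urlencode({**params, "format": "json"})
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))


def topic_intro(title: str) -> str:
    """Plain-text introduction of an article (what the robot would read out)."""
    data = _query({"action": "query", "prop": "extracts",
                   "exintro": 1, "explaintext": 1, "titles": title})
    page = next(iter(data["query"]["pages"].values()))
    return page.get("extract", "")


def follow_up_topics(title: str, limit: int = 10) -> list:
    """Linked article titles, i.e. candidate topics the user could switch to."""
    data = _query({"action": "query", "prop": "links",
                   "pllimit": limit, "titles": title})
    page = next(iter(data["query"]["pages"].values()))
    return [link["title"] for link in page.get("links", [])]


if __name__ == "__main__":
    topic = "Nao (robot)"
    print(topic_intro(topic)[:300], "...")
    print("You could ask me about:", ", ".join(follow_up_topics(topic)))
```

Deriving follow-up topics from the article's own links is what keeps the conversation open-domain: no hand-written dialogue grammar is needed for new topics.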
3.
  • Csapo, A., et al. (author)
  • Open-Domain Conversation with a NAO Robot
  • 2012
  • In: 3rd International Conference on Cognitive Infocommunications (CogInfoCom 2012), Kosice.
  • Conference paper (peer-reviewed), abstract:
    • In this demo, we present a multimodal conversation system, implemented using a Nao robot and Wikipedia. The system was developed at the 8th International Workshop on Multimodal Interfaces in Metz, France, 2012. The system is based on an interactive, open-domain spoken dialogue system called WikiTalk, which guides the user through conversations based on the link structure of Wikipedia. In addition to speech interaction, the robot interacts with users by tracking their faces and nodding/gesturing at key points of interest within the Wikipedia text. The proximity detection capabilities of the Nao, as well as its tactile sensors, were used to implement context-based interrupts in the dialogue system.
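The demo abstract mentions context-based interrupts driven by the Nao's proximity detection and tactile sensors. As a rough illustration only, the sketch below shows one generic way such interrupts can be handled: sensor events are polled between utterance chunks so a touch or an approaching user can break into the talk. The event queue, sensor names, and speak_chunk stand-in are hypothetical and do not use the NAOqi API.

```python
# Illustrative sketch only: generic interrupt pattern, not the demo's code.

import queue
from dataclasses import dataclass
from typing import Iterable


@dataclass
class SensorEvent:
    source: str   # e.g. "head_touch", "proximity" (hypothetical names)
    value: float


def speak_chunk(text: str) -> None:
    # Stand-in for the robot's text-to-speech call.
    print(f"[robot says] {text}")


def talk_with_interrupts(chunks: Iterable[str],
                         events: "queue.Queue[SensorEvent]") -> str:
    """Deliver the topic chunk by chunk, stopping when a sensor event arrives.

    Returns the reason the talk stopped: "finished" or the event source.
    """
    for chunk in chunks:
        # Poll for interrupts before each chunk rather than blocking mid-speech.
        try:
            event = events.get_nowait()
        except queue.Empty:
            event = None
        if event is not None:
            speak_chunk("Oh, you wanted to say something?")
            return event.source
        speak_chunk(chunk)
    return "finished"


if __name__ == "__main__":
    q: "queue.Queue[SensorEvent]" = queue.Queue()
    q.put(SensorEvent(source="head_touch", value=1.0))   # simulate a tactile interrupt
    outcome = talk_with_interrupts(
        ["Nao is a humanoid robot.", "It was developed by Aldebaran Robotics."], q)
    print("stopped because:", outcome)
```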
Type of publication
conference paper (3)
Type of content
peer-reviewed (3)
Author/Editor
Gilmartin, E. (3)
Jokinen, K. (2)
Wilcock, G. (2)
Han, J. (2)
Csapo, A. (2)
Grizou, J. (2)
Meena, Raveesh (2)
Anastasiou, D. (2)
Stefanov, Kalin (1)
Eyben, F. (1)
Joder, C. (1)
Marchi, E. (1)
Munier, C. (1)
Weninger, F. (1)
Schuller, B. (1)
University
Royal Institute of Technology (3)
Language
English (3)
Research subject (UKÄ/SCB)
Natural sciences (3)
Year
2012 (3)