SwePub
Search the SwePub database


Results list for search "WFRF:(Gustafson AL)"

  • Results 1-21 of 21
1.
  • Adrian-Martinez, S., et al. (authors)
  • A first search for coincident gravitational waves and high energy neutrinos using LIGO, Virgo and ANTARES data from 2007
  • 2013
  • In: Journal of Cosmology and Astroparticle Physics. - : IOP Publishing. - 1475-7516. ; :6
  • Journal article (peer-reviewed), abstract:
    • We present the results of the first search for gravitational wave bursts associated with high energy neutrinos. Together, these messengers could reveal new, hidden sources that are not observed by conventional photon astronomy, particularly at high energy. Our search uses neutrinos detected by the underwater neutrino telescope ANTARES in its 5 line configuration during the period January - September 2007, which coincided with the fifth and first science runs of LIGO and Virgo, respectively. The LIGO-Virgo data were analysed for candidate gravitational-wave signals coincident in time and direction with the neutrino events. No significant coincident events were observed. We place limits on the density of joint high energy neutrino - gravitational wave emission events in the local universe, and compare them with densities of merger and core-collapse events.
2.
  • 2017
  • In: Physical Review D. - 2470-0010 .- 2470-0029. ; 96:2
  • Journal article (peer-reviewed)
3.
  • Al Moubayed, Samer, et al. (authors)
  • Analysis of gaze and speech patterns in three-party quiz game interaction
  • 2013
  • In: Interspeech 2013. - : The International Speech Communication Association (ISCA). ; pp. 1126-1130
  • Conference paper (peer-reviewed), abstract:
    • In order to understand and model the dynamics between interaction phenomena such as gaze and speech in face-to-face multiparty interaction between humans, we need large quantities of reliable, objective data of such interactions. To date, this type of data is in short supply. We present a data collection setup using automated, objective techniques in which we capture the gaze and speech patterns of triads deeply engaged in a high-stakes quiz game. The resulting corpus consists of five one-hour recordings, and is unique in that it makes use of three state-of-the-art gaze trackers (one per subject) in combination with a state-of-the-art conical microphone array designed to capture roundtable meetings. Several video channels are also included. In this paper we present the obstacles we encountered and the possibilities afforded by a synchronised, reliable combination of large-scale multi-party speech and gaze data, together with an overview of the first analyses of the data. Index Terms: multimodal corpus, multiparty dialogue, gaze patterns, multiparty gaze.
4.
  • Al Moubayed, Samer, et al. (authors)
  • Furhat goes to Robotville: a large-scale multiparty human-robot interaction data collection in a public space
  • 2012
  • In: Proc. of the LREC Workshop on Multimodal Corpora. - Istanbul, Turkey.
  • Conference paper (peer-reviewed), abstract:
    • In the four days of the Robotville exhibition at the London Science Museum, UK, during which the back-projected head Furhat in a situated spoken dialogue system was seen by almost 8 000 visitors, we collected a database of 10 000 utterances spoken to Furhat in situated interaction. The data collection is an example of a particular kind of corpus collection of human-machine dialogues in public spaces that has several interesting and specific characteristics, both with respect to the technical details of the collection and with respect to the resulting corpus contents. In this paper, we take the Furhat data collection as a starting point for a discussion of the motives for this type of data collection, its technical peculiarities and prerequisites, and the characteristics of the resulting corpus.
5.
  • Al Moubayed, Samer, et al. (authors)
  • Human-robot Collaborative Tutoring Using Multiparty Multimodal Spoken Dialogue
  • 2014
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we describe a project that explores a novel experimental setup towards building a spoken, multimodally rich, and human-like multiparty tutoring robot. A human-robot interaction setup is designed, and a human-human dialogue corpus is collected. The corpus targets the development of a dialogue system platform to study verbal and nonverbal tutoring strategies in multiparty spoken interactions with robots that are capable of spoken dialogue. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. Along with the participants sits a tutor (robot) that helps the participants perform the task, and organizes and balances their interaction. Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies, such as a microphone array, Kinects, and video cameras, were coupled with manual annotations. These are used to build a situated model of the interaction based on the participants' personalities, their state of attention, their conversational engagement and verbal dominance, and how these correlate with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. Driven by the analysis of the corpus, we also show the detailed design methodologies for an affective and multimodally rich dialogue system that allows the robot to incrementally measure the attention state and dominance of each participant, allowing the robot head Furhat to maintain a well-coordinated, balanced, and engaging conversation that attempts to maximize agreement and each participant's contribution to solving the task. This project takes the first steps towards exploring the potential of using multimodal dialogue systems to build interactive robots that can serve in educational, team-building, and collaborative task-solving applications.
6.
  • Al Moubayed, Samer, et al. (authors)
  • Multimodal Multiparty Social Interaction with the Furhat Head
  • 2012
  • Conference paper (peer-reviewed), abstract:
    • We will show in this demonstrator an advanced multimodal and multiparty spoken conversational system using Furhat, a robot head based on projected facial animation. Furhat is a human-like interface that utilizes facial animation for physical robot heads using back-projection. In the system, multimodality is enabled using speech and rich visual input signals such as multi-person real-time face tracking and microphone tracking. The demonstrator will showcase a system that is able to carry out social dialogue with multiple interlocutors simultaneously with rich output signals such as eye and head coordination, lips synchronized speech synthesis, and non-verbal facial gestures used to regulate fluent and expressive multiparty conversations.
7.
  • Al Moubayed, Samer, et al. (authors)
  • Talking with Furhat - multi-party interaction with a back-projected robot head
  • 2012
  • In: Proceedings of Fonetik 2012. - Gothenburg, Sweden. ; pp. 109-112
  • Conference paper (other academic/artistic), abstract:
    • This is a condensed presentation of some recent work on a back-projected robotic head for multi-party interaction in public settings. We describe some of the design strategies and give a preliminary analysis of an interaction database collected at the Robotville exhibition at the London Science Museum.
8.
  • Blomberg, Mats, et al. (authors)
  • Children and adults in dialogue with the robot head Furhat - corpus collection and initial analysis
  • 2012
  • In: Proceedings of WOCCI. - Portland, OR : The International Society for Computers and Their Applications (ISCA).
  • Conference paper (peer-reviewed), abstract:
    • This paper presents a large-scale study in a public museum setting, where a back-projected robot head interacted with the visitors in multi-party dialogue. The exhibition was seen by almost 8,000 visitors, of whom several thousand interacted with the system. A considerable portion of the visitors were children from around 4 years of age and adolescents. The collected corpus consists of about 10,000 user utterances. The head and a multi-party dialogue design allow the system to regulate the turn-taking behaviour, and help the robot to effectively obtain information from the general public. The commercial speech recognition component, presumably designed for adult speakers, had considerably lower accuracy for the children. Methods are proposed for improving the performance for that speaker category.
9.
  • Castaño Arranz, Miguel, et al. (authors)
  • A generic framework for data quality analytics
  • 2020
  • In: International Journal of COMADEM. - : COMADEM International. - 1363-7681. ; 23:1, pp. 31-38
  • Journal article (peer-reviewed), abstract:
    • The challenge of generalizing Data Quality assessment is that Data Quality requisites depend on the purpose for which the data will be used and on the subjectivity of the data consumer. The approach proposed in this paper is a semi-automated, user-guided Data Quality assessment. The paper introduces a generic framework for data quality analytics, composed mainly of a set of software units that perform semi-automated Data Quality analytics and a set of Graphical User Interfaces that enable the user to guide the assessment. The framework has been implemented and can be customized according to the purpose and the needs of the data consumer. It has been instantiated in a case study on long-hole drill rigs, where several Data Quality issues were discovered and their root causes investigated.
10.
  • Donovan, M., et al. (authors)
  • The cellular retinoic acid binding proteins
  • 1995
  • In: The Journal of Steroid Biochemistry and Molecular Biology. - 0960-0760. ; 53, pp. 459-
  • Journal article (peer-reviewed)
11.
  • Edlund, Jens, et al. (authors)
  • Audience response system based annotation of speech
  • 2013
  • In: Proceedings of Fonetik 2013. - Linköping : Linköping University. - 9789175195827 - 9789175195797 ; pp. 13-16
  • Conference paper (other academic/artistic), abstract:
    • Manual annotators are often used to label speech, a task associated with high costs and great time consumption. We suggest increasing throughput while maintaining a high degree of experimental control by borrowing from the Audience Response Systems used in the film and television industries. We demonstrate a cost-efficient setup for rapid, plenary annotation of phenomena occurring in recorded speech, together with some results from studies we have undertaken to quantify the temporal precision and reliability of such annotations.
18.
  • Mirnig, Nicole, et al. (authors)
  • Face-To-Face With A Robot : What do we actually talk about?
  • 2013
  • In: International Journal of Humanoid Robotics. - 0219-8436. ; 10:1, pp. 1350011-
  • Journal article (peer-reviewed), abstract:
    • While much of the state-of-the-art research in human-robot interaction (HRI) investigates task-oriented interaction, this paper aims at exploring what people talk about to a robot if the content of the conversation is not predefined. We used the robot head Furhat to explore the conversational behavior of people who encounter a robot in the public setting of a robot exhibition in a scientific museum, but without a predefined purpose. Upon analyzing the conversations, it could be shown that a sophisticated robot provides an inviting atmosphere for people to engage in interaction and to be experimental and challenge the robot's capabilities. Many visitors to the exhibition were willing to go beyond the guiding questions that were provided as a starting point. Amongst other things, they asked Furhat questions concerning the robot itself, such as how it would define a robot, or if it plans to take over the world. People were also interested in the feelings and likes of the robot and they asked many personal questions - this is how Furhat ended up with its first marriage proposal. People who talked to Furhat were asked to complete a questionnaire on their assessment of the conversation, with which we could show that the interaction with Furhat was rated as a pleasant experience.
20.
  • Simon, A., et al. (authors)
  • Intracellular localization and membrane topology of 11-cis retinol dehydrogenase in the retinal pigment epithelium suggest a compartmentalized synthesis of 11-cis retinaldehyde
  • 1999
  • In: Journal of Cell Science. - : The Company of Biologists. - 0021-9533 .- 1477-9137. ; 112 (Pt 4), pp. 549-558
  • Journal article (peer-reviewed), abstract:
    • 11-cis retinol dehydrogenase (EC 1.1.1.105) catalyses the last step in the biosynthetic pathway generating 11-cis retinaldehyde, the common chromophore of all visual pigments in higher animals. The enzyme is abundantly expressed in retinal pigment epithelium of the eye and is a member of the short chain dehydrogenase/reductase superfamily. In this work we demonstrate that a majority of 11-cis retinol dehydrogenase is associated with the smooth ER in retinal pigment epithelial cells and that the enzyme is an integral membrane protein, anchored to membranes by two hydrophobic peptide segments. The catalytic domain of the enzyme is confined to a lumenal compartment and is not present on the cytosolic aspect of membranes. Thus, the subcellular localization and the membrane topology of 11-cis retinol dehydrogenase suggest that generation of 11-cis retinaldehyde is a compartmentalized process.