SwePub
Search the SwePub database


Hit list for the search "WFRF:(Qvarfordt Pernilla) srt2:(2000-2004)"


  • Result 1-7 of 7
1.
  • Bäckvall, P, et al. (author)
  • Using Fisheye for Navigation on Small Displays
  • 2000
  • In: Nordic Conference on Computer-Human Interaction, 2000.
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we present a solution to the problem of visualising large amounts of hierarchical information on small computer screens. Our solution has been implemented as a prototype for mobile use on a hand-held computer running Microsoft Pocket PC with a screen size of 240x320 pixels. The prototype uses the same information as service engineers use on stationary computers. The visualisation technique we used for displaying information is based on the fisheye technique, which we have found to work well on small displays. The prototype is domain independent; the information is easily interchangeable. A consequence of the result presented here is that the possibility of using hand-held computers in different types of contexts increases.
  •  
2.
  • Lundberg, Jonas, et al. (author)
  • "The snatcher catcher" - an interactive refrigerator
  • 2002
  • In: Proceedings of the second Nordic conference on Human-computer interaction, pp. 209-211.
  • Conference paper (other academic/artistic). Abstract:
    • In order to provoke a debate about the use of new technology, we created the Snatcher Catcher, an intrusive interactive refrigerator that keeps a record of the items in it. In this paper we present the fridge and how we used it in a provocative installation. The results showed that the audience was provoked, and that few people wanted the fridge in their surroundings.
  •  
3.
  • Qvarfordt, Pernilla, 1972- (author)
  • Eyes on multimodal interaction
  • 2004
  • Doctoral thesis (other academic/artistic). Abstract:
    • Advances in technology are making it possible for users to interact with computers through various modalities, often speech and gesture. Such multimodal interaction is attractive because it mimics the patterns and skills of natural human-human communication. To date, research in this area has primarily focused on giving commands to computers. The focus of this thesis shifts from commands to dialogue interaction. The work presented here is divided into two parts. The first part looks at the impact of the characteristics of spoken feedback on users' experience of a multimodal dialogue system. The second part investigates whether and how eye-gaze can be utilized in multimodal dialogue systems. Although multimodal interaction has attracted many researchers, little attention has been paid to how users experience such systems. The first part of this thesis investigates what makes multimodal dialogue systems either human-like or tool-like, and what qualities are most important to users. In a simulated multimodal timetable information system, users were exposed to different levels of spoken feedback. The results showed that the users preferred the system to be either clearly tool-like, with no spoken words, or clearly human-like, with complete and natural utterances. Furthermore, the users' preference for a human-like multimodal system tended to be much higher after they had actual experience of it than beforehand, when based on imagination. Eye-gaze plays a powerful role in human communication. In a computer-mediated collaborative task involving a tourist and a tourist consultant, the second part of this thesis begins by examining the users' eye-gaze patterns and their functions in deictic referencing, interest detection, topic switching, ambiguity reduction, and establishing common ground in a dialogue. Based on the results of this study, an interactive tourist advisor system that encapsulates some of the identified patterns and regularities was developed. In a "stress test" experiment based on eye-gaze patterns only, the developed system conversed with users to help them plan their conference trips. The results demonstrated that eye-gaze can play an assistive role in managing future multimodal human-computer dialogues.
  •  
4.
  • Qvarfordt, Pernilla, et al. (author)
  • First-Personness Approach to Co-operative Multimodal Interaction
  • 2000
  • In: Advances in Multimodal Interfaces — ICMI 2000. Springer Berlin/Heidelberg. ISBN 9783540411802, 9783540400639, 3540411801. pp. 650-657.
  • Conference paper (peer-reviewed). Abstract:
    • Using natural language in addition to graphical user interfaces is often put forward as an argument for better interaction. However, just adding spoken language might not lead to better interaction. In this article we look deeper into how spoken language should be used in a co-operative multimodal interface. Based on empirical investigations, we have noticed that efficiency is especially important for multimodal information systems. Our results indicate that efficiency can be divided into functional and linguistic efficiency. Functional efficiency is tightly related to solving the task quickly. Linguistic efficiency concerns how to make contributions meaningful and appropriate in context. For linguistic efficiency, the user's perception of first-personness [1] is important, as is giving users support for understanding the interface and adapting the responses to the user. In this article the focus is on linguistic efficiency for a multimodal timetable information system.
  •  
5.
  •  
6.
  • Qvarfordt, Pernilla, et al. (author)
  • The Role of Spoken Feedback in Experiencing Multimodal Interfaces as Human-like
  • 2003
  • In: Proceedings of ICMI'03, Vancouver, Canada, 2003. New York, NY, USA: ACM Digital Library. ISBN 1581136218. pp. 250-257.
  • Conference paper (peer-reviewed). Abstract:
    • Whether user interfaces should be made human-like or tool-like has been debated in the HCI field, and this debate affects the development of multimodal interfaces. However, little empirical work has been done so far to support either view. Even if there is evidence that humans interpret media as they do other humans, this does not mean that humans experience interfaces as human-like. We studied how people experience a multimodal timetable system with varying degrees of human-like spoken feedback in a Wizard-of-Oz study. The results showed that users' views and preferences lean significantly towards anthropomorphism after actually experiencing the multimodal timetable system. The more human-like the spoken feedback was, the more participants preferred the system to be human-like. The results also showed that the users' experience matched their preferences. This shows that in order to appreciate a human-like interface, users have to experience it.
  •  
7.
  • Qvarfordt, Pernilla, 1972- (author)
  • User experience of spoken feedback in multimodal interaction
  • 2003
  • Licentiate thesis (other academic/artistic). Abstract:
    • The area of multimodal interaction is growing fast, and is showing promising results in making interaction more efficient and robust. These results are mainly based on better recognizers and on studies of how users interact with particular multimodal systems. However, little research has been done on users' subjective experience of using multimodal interfaces, which is an important aspect of the acceptance of multimodal interfaces. The work presented in this thesis focuses on how users experience multimodal interaction, and what qualities are important for the interaction. Traditional user interfaces and speech and multimodal interfaces are often described as having different interaction characters. Traditional user interfaces are often seen as tools, while speech and multimodal interfaces are often described as dialogue partners. Researchers have ascribed different qualities as important for performance and satisfaction for these two interaction characters. These claims are examined by studying how users react to a multimodal timetable system. In this study, spoken feedback was used to make the interaction more human-like. A Wizard-of-Oz method was used to simulate the recognition and generation engines in the timetable system for public transportation. The results from the study showed that users experience the system as having an interaction character, and that spoken feedback influences that experience. The more spoken feedback the system gives, the more users will experience the system as a dialogue partner. The evaluation of the qualities of the interaction showed that users preferred either no spoken feedback or elaborated spoken feedback. Limited spoken feedback only distracted the users.
  •  