SwePub
Search the SwePub database
Results for the search "hsv:(NATURVETENSKAP) hsv:(Data och informationsvetenskap) ;mspu:(conferencepaper);lar1:(oru)"

Search: hsv:(NATURVETENSKAP) hsv:(Data och informationsvetenskap) > Conference paper > Örebro University

  • Result 1-10 of 1151
1.
  • Barreiro, Anabela, et al. (author)
  • Multi3Generation : Multitask, Multilingual, Multimodal Language Generation
  • 2022
  • In: Proceedings of the 23rd Annual Conference of the European Association for Machine Translation. - : European Association for Machine Translation. ; , s. 345-346
  • Conference paper (peer-reviewed), abstract:
    • This paper presents the Multitask, Multilingual, Multimodal Language Generation COST Action – Multi3Generation (CA18231), an interdisciplinary network of research groups working on different aspects of language generation. This "meta-paper" will serve as a reference for citation of the Action in future publications. It presents the objectives, the challenges, and links to the achieved outcomes.
2.
  • Frid, Emma, et al. (author)
  • An Exploratory Study On The Effect Of Auditory Feedback On Gaze Behavior In a Virtual Throwing Task With and Without Haptic Feedback
  • 2017
  • In: Proceedings of the 14th Sound and Music Computing Conference. - Espoo, Finland : Aalto University. - 9789526037295 ; , s. 242-249
  • Conference paper (peer-reviewed), abstract:
    • This paper presents findings from an exploratory study on the effect of auditory feedback on gaze behavior. A total of 20 participants took part in an experiment where the task was to throw a virtual ball into a goal in different conditions: visual only, audiovisual, visuohaptic and audiovisuohaptic. Two different sound models were compared in the audio conditions. Analysis of eye-tracking metrics indicated large inter-subject variability; the difference between subjects was greater than the difference between feedback conditions. No significant effect of condition could be observed, but clusters of similar behaviors were identified. Some of the participants' gaze behaviors appeared to have been affected by the presence of auditory feedback, but the effect of sound model was not consistent across subjects. We discuss individual behaviors and illustrate gaze behavior through sonification of gaze trajectories. Findings from this study raise intriguing questions that motivate future large-scale studies on the effect of auditory feedback on gaze behavior.
3.
  • Kristoffersson, Annica, 1980-, et al. (author)
  • Sense of presence in a robotic telepresence domain
  • 2011
  • In: Universal access in human-computer interaction: users diversity, PT 2. - Berlin, Heidelberg : Springer Berlin Heidelberg. - 9783642216626 ; , s. 479-487
  • Conference paper (peer-reviewed), abstract:
    • Robotic telepresence offers a means to connect to a remote location via traditional telepresence, with the added value of moving and actuating in that location. Recently, there has been a growing focus on the use of robotic telepresence to enhance social interaction among the elderly. However, for such technology to be accepted, the presence experienced when using such a system is likely to be important. In this paper, we present results obtained from a training session with a robotic telepresence system when used for the first time by healthcare personnel. The study was quantitative and based on two standard presence questionnaires: the Temple Presence Inventory (TPI) and the Networked Minds Social Presence Inventory. The study showed that, overall, the sense of social richness as perceived by the users was high. The users also had a realistic feeling regarding their spatial presence.
4.
  • Kokic, Mia, et al. (author)
  • Affordance detection for task-specific grasping using deep learning
  • 2017
  • In: 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids). - : IEEE conference proceedings. - 9781538646786 ; , s. 91-98
  • Conference paper (peer-reviewed), abstract:
    • In this paper we utilize the notion of affordances to model relations between a task, an object, and a grasp to address the problem of task-specific robotic grasping. We use convolutional neural networks for encoding and detecting object affordances, class and orientation, which we utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for that task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute task-specific grasps.
5.
  • Thippur, Akshaya, et al. (author)
  • Non-Parametric Spatial Context Structure Learning for Autonomous Understanding of Human Environments
  • 2017
  • In: 2017 26TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN). - : IEEE. - 9781538635186 ; , s. 1317-1324
  • Conference paper (peer-reviewed), abstract:
    • Autonomous scene understanding by object classification today crucially depends on the accuracy of appearance-based robotic perception. However, it is prone to difficulties in object detection arising from unfavourable lighting conditions and vision-unfriendly object properties. In our work, we propose a spatial-context-based system which infers object classes using solely structural information captured from the scenes to aid traditional perception systems. Our system operates on novel spatial features (IFRC) that are robust to noisy object detections; it also supports on-the-fly modification of learned knowledge, improving performance with practice. IFRC features are aligned with human expression of 3D space, thereby facilitating easy human-robot interaction and hence simpler supervised learning. We tested our spatial-context-based system and concluded that it can capture spatio-structural information for joint object classification, not only acting as a vision aid but sometimes even performing on par with appearance-based robotic vision.
6.
  • Jusufi, Ilir, 1983-, et al. (author)
  • TapVis : A Data Visualization Approach for Assessment of Alternating Tapping Performance in Patients with Parkinson's Disease
  • 2018
  • In: EuroVis 2018 - Short Papers. - : Eurographics - European Association for Computer Graphics. - 9783038680604 ; , s. 55-59
  • Conference paper (peer-reviewed), abstract:
    • Advancements in telemedicine have been helpful for frequent monitoring of patients with Parkinson's disease (PD) from remote locations and for assessment of their individual symptoms and treatment-related complications. By visualizing the physiological information collected by sensor-based systems, these data can help clinicians interpret symptom states and individually tailor treatments. In this paper we present a visualization metaphor that represents symptom information of PD patients during tapping tests performed with a smartphone. The metaphor has been developed and evaluated with a clinician. It enabled the clinician to observe fine motor impairments and identify motor fluctuations regarding several movement aspects of patients who perform the tests from their homes.
7.
  • Bhatt, Mehul, Professor, 1980-, et al. (author)
  • Deep Semantics for Explainable Visuospatial Intelligence : Perspectives on Integrating Commonsense Spatial Abstractions and Low-Level Neural Features
  • 2019
  • In: Proceedings of the 2019 International Workshop on Neural-Symbolic Learning and Reasoning.
  • Conference paper (peer-reviewed), abstract:
    • High-level semantic interpretation of (dynamic) visual imagery calls for general and systematic methods integrating techniques in knowledge representation and computer vision. Towards this, we position "deep semantics", denoting the existence of declarative models, e.g., pertaining to "space and motion", and corresponding formalisations and methods supporting (domain-independent) explainability capabilities such as semantic question-answering, relational (and relationally driven) visuospatial learning, and (non-monotonic) visuospatial abduction. Rooted in recent work, we summarise and report the status quo on deep visuospatial semantics, and our approach to neurosymbolic integration and explainable visuospatial computing in that context, with developed methods and tools in diverse settings such as behavioural research in psychology, art & social sciences, and autonomous driving.
8.
  • Chatzipetrou, Panagiota, Assistant Professor, 1984-, et al. (author)
  • Statistical Analysis of Requirements Prioritization for Transition to Web Technologies : A Case Study in an Electric Power Organization
  • 2014
  • In: Software Quality. Model-Based Approaches for Advanced Software and Systems Engineering. - Cham : Springer. - 9783319036021 - 9783319036014 ; , s. 63-84
  • Conference paper (peer-reviewed), abstract:
    • Transition from an existing IT system to modern Web technologies provides multiple benefits to an organization and its customers. Such a transition in a large organization involves various groups of stakeholders who may prioritize the requirements of the software under development differently. In our case study, the organization is a leading domestic company in the field of electric power. The existing online system supports customer service along with technical activities and has more than 1,500 registered users, with up to 300 simultaneous users. The paper presents an empirical study in which 51 employees in different roles prioritize 18 software requirements using hierarchical cumulative voting. The goal of this study is to test for significant differences in prioritization between groups of stakeholders. Statistical methods involving data transformation, ANOVA and discriminant analysis were applied to the data. The results showed significant differences between the roles of the stakeholders for certain requirements.
9.
  • Bhatt, Mehul, Professor, 1980-, et al. (author)
  • Cognitive Vision and Perception : Deep Semantics Integrating AI and Vision for Reasoning about Space, Motion, and Interaction
  • 2020
  • In: ECAI 2020. - : IOS Press. - 9781643681009 - 9781643681016 ; , s. 2881-2882
  • Conference paper (peer-reviewed), abstract:
    • Semantic interpretation of dynamic visuospatial imagery calls for a general and systematic integration of methods in knowledge representation and computer vision. Towards this, we highlight research articulating and developing deep semantics, characterised by the existence of declarative models, e.g., pertaining to space and motion, and corresponding formalisation and reasoning methods supporting capabilities such as semantic question-answering, relational visuospatial learning, and (non-monotonic) visuospatial explanation. We position a working model for deep semantics by highlighting selected recent and closely related works from IJCAI, AAAI, ILP, and ACS. We posit that human-centred, explainable visual sensemaking necessitates both high-level semantics and low-level visual computing, with the highlighted works providing a model for the systematic, modular integration of diverse multifaceted techniques developed in AI, ML, and Computer Vision.
Type of content
peer-reviewed (1062)
other academic/artistic (85)
pop. science, debate, etc. (4)
Author/Editor
Lilienthal, Achim J. ... (100)
Saffiotti, Alessandr ... (87)
De Raedt, Luc, 1964- (85)
Pecora, Federico, 19 ... (76)
Bhatt, Mehul, Profes ... (72)
Loutfi, Amy, 1978- (60)
Andreasson, Henrik, ... (40)
Saffiotti, Alessandr ... (30)
Suchan, Jakob (29)
Lilienthal, Achim, 1 ... (28)
Scandurra, Isabella, ... (28)
Grönlund, Åke, 1954- (27)
Gao, Shang, 1982- (27)
Karlsson, Lars (26)
Klügl, Franziska, 19 ... (26)
Martinez Mozos, Osca ... (25)
Dragoni, Nicola, 197 ... (25)
Broxvall, Mathias (25)
Coradeschi, Silvia (24)
Lilienthal, Achim J. (24)
Stoyanov, Todor, 198 ... (23)
Kiselev, Andrey, 198 ... (23)
Duckett, Tom (23)
Stork, Johannes Andr ... (23)
Schaffernicht, Erik, ... (23)
Cesta, Amedeo (22)
Kristoffersson, Anni ... (20)
Magnusson, Martin, 1 ... (20)
Hernandez Bennetts, ... (20)
Loutfi, Amy (19)
Kolkowska, Ella, 197 ... (18)
Hedström, Karin, 196 ... (17)
Saffiotti, Alessandr ... (17)
Krug, Robert, 1981- (17)
Kragic, Danica (16)
Alirezaie, Marjan, 1 ... (16)
Driankov, Dimiter, 1 ... (15)
Kalaykov, Ivan (15)
Karlsson, Lars, 1968 ... (15)
Köckemann, Uwe, 1983 ... (14)
Mansouri, Masoumeh, ... (14)
Kondyli, Vasiliki, 1 ... (14)
Cortellessa, Gabriel ... (14)
Giaretta, Alberto, 1 ... (14)
Nyholm, Dag (13)
Memedi, Mevludin, Ph ... (13)
Rasconi, Riccardo (13)
Bouguerra, Abdelbaki (13)
Karlsson, Fredrik (12)
Schultz, Carl (12)
University
Mälardalen University (45)
Royal Institute of Technology (38)
Uppsala University (37)
Högskolan Dalarna (30)
Blekinge Institute of Technology (26)
University of Skövde (25)
Halmstad University (17)
Umeå University (14)
Linköping University (14)
Linnaeus University (13)
RISE (13)
Luleå University of Technology (7)
Karolinska Institutet (6)
Stockholm University (5)
Mid Sweden University (5)
Chalmers University of Technology (5)
Jönköping University (3)
Lund University (3)
Karlstad University (3)
Swedish University of Agricultural Sciences (3)
University of Gothenburg (2)
University of Borås (2)
Malmö University (1)
Language
English (1146)
Swedish (5)
Research subject (UKÄ/SCB)
Natural sciences (1151)
Engineering and Technology (120)
Social Sciences (72)
Medical and Health Sciences (41)
Humanities (9)
Agricultural Sciences (2)
