SwePub
Search the SwePub database

  Extended search

Result list for search "WFRF:(Dima Elijs 1990 )"


  • Result 1-3 of 3
1.
  • Dima, Elijs, 1990- (author)
  • Augmented Telepresence based on Multi-Camera Systems : Capture, Transmission, Rendering, and User Experience
  • 2021
  • Doctoral thesis (other academic/artistic), abstract:
    •  Observation and understanding of the world through digital sensors is an ever-increasing part of modern life. Systems of multiple sensors acting together have far-reaching applications in automation, entertainment, surveillance, remote machine control, and robotic self-navigation. Recent developments in digital camera, range sensor, and immersive display technologies enable the combination of augmented reality and telepresence into Augmented Telepresence, which promises more effective and immersive forms of interaction with remote environments.
       The purpose of this work is to gain a more comprehensive understanding of how multi-sensor systems lead to Augmented Telepresence, and how Augmented Telepresence can be utilized for industry-related applications. On the one hand, the conducted research focuses on the technological aspects of multi-camera capture, rendering, and end-to-end systems that enable Augmented Telepresence. On the other hand, the research also considers the user experience aspects of Augmented Telepresence, to obtain a more comprehensive perspective on the application and design of Augmented Telepresence solutions.
       This work addresses multi-sensor system design for Augmented Telepresence regarding four specific aspects, ranging from sensor setup for effective capture to the rendering of outputs for Augmented Telepresence. More specifically, the following problems are investigated: 1) whether multi-camera calibration methods can reliably estimate the true camera parameters; 2) what the consequences are of synchronization errors in a multi-camera system; 3) how to design a scalable multi-camera system for low-latency, real-time applications; and 4) how to enable Augmented Telepresence from multi-sensor systems for mining, without prior data capture or conditioning. The first problem was solved by conducting a comparative assessment of widely available multi-camera calibration methods. A special dataset was recorded, enforcing known constraints on camera ground-truth parameters to use as a reference for calibration estimates. The second problem was addressed by introducing a depth uncertainty model that links the pinhole camera model and synchronization error to the geometric error in the 3D projections of recorded data. The third problem was addressed empirically, by constructing a multi-camera system based on off-the-shelf hardware and a modular software framework. The fourth problem was addressed by proposing a processing pipeline of an augmented remote operation system for augmented and novel view rendering.
       The calibration assessment revealed that target-based and certain target-less calibration methods are relatively similar in their estimations of the true camera parameters, with one specific exception. For high-accuracy scenarios, even commonly used target-based calibration approaches are not sufficiently accurate with respect to the ground truth. The proposed depth uncertainty model was used to show that converged multi-camera arrays are less sensitive to synchronization errors. The mean depth uncertainty of a camera system correlates with the rendered result in depth-based reprojection, as long as the camera calibration matrices are accurate. The presented multi-camera system demonstrates a flexible, decentralized framework where data processing is possible in the camera, in the cloud, and on the data consumer's side. The multi-camera system is able to act as a capture testbed and as a component in end-to-end communication systems, because of the general-purpose computing and network connectivity support coupled with a segmented software framework. This system forms the foundation for the augmented remote operation system, which demonstrates the feasibility of real-time view generation by employing on-the-fly lidar de-noising and sparse depth upscaling for novel and augmented view synthesis.
       In addition to the aforementioned technical investigations, this work also addresses the user experience impacts of Augmented Telepresence. The following two questions were investigated: 1) What is the impact of camera-based viewing position in Augmented Telepresence? 2) What is the impact of depth-aiding augmentations in Augmented Telepresence? Both are addressed through a quality of experience study with non-expert participants, using a custom Augmented Telepresence test system for a task-based experiment. The experiment design combines in-view augmentation, camera view selection, and stereoscopic augmented scene presentation via a head-mounted display to investigate both the independent factors and their joint interaction.
       The results indicate that, between the two factors, view position has a stronger influence on user experience. Task performance and quality of experience were significantly decreased by viewing positions that force users to rely on stereoscopic depth perception. However, position-assisting view augmentations can mitigate the negative effect of sub-optimal viewing positions; the extent of such mitigation is subject to the augmentation design and appearance.
       In aggregate, the works presented in this dissertation cover a broad view of Augmented Telepresence. The individual solutions contribute general insights into Augmented Telepresence system design, fill gaps in the current discourse of specific areas, and provide tools for solving challenges found in enabling capture, processing, and rendering in real-time-oriented end-to-end systems.
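The abstract above links synchronization error to geometric error via a depth uncertainty model, but does not reproduce the model itself. As a hypothetical illustration only (not the dissertation's actual formulation), the sketch below shows how a sync offset translates into depth error for an assumed rectified stereo pair under the pinhole model; all parameter values are invented for the example.

```python
# Illustrative sketch, NOT the dissertation's depth uncertainty model.
# Assume a rectified stereo pair (focal length f in pixels, baseline b in
# metres) observing a point at depth Z moving laterally at speed v. A sync
# error dt shifts the point's image in the late camera, perturbing the
# disparity d and hence the triangulated depth Z = f * b / d.

def triangulated_depth(f, b, disparity):
    """Depth from disparity under the pinhole model (rectified pair)."""
    return f * b / disparity

def depth_error_from_sync(f, b, Z, v, dt):
    """Depth error caused by a sync offset dt for a point at depth Z
    moving laterally at speed v (m/s)."""
    d = f * b / Z            # ideal disparity (pixels)
    du = f * (v * dt) / Z    # pixel shift caused by the sync offset
    return triangulated_depth(f, b, d + du) - Z

# Example: 1000 px focal length, 0.2 m baseline, target at 5 m moving at
# 1 m/s, cameras 10 ms out of sync.
err = depth_error_from_sync(f=1000.0, b=0.2, Z=5.0, v=1.0, dt=0.01)
```

Even this toy version shows the abstract's qualitative point: the induced pixel shift, and therefore the depth error, shrinks as the geometry reduces the motion seen between the unsynchronized exposures.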
2.
  • Dima, Elijs, 1990-, et al. (author)
  • Camera and Lidar-based View Generation for Augmented Remote Operation in Mining Applications
  • 2021
  • In: IEEE Access. - IEEE. - ISSN 2169-3536; vol. 9, pp. 82199-82212
  • Journal article (peer-reviewed), abstract:
    • Remote operation of diggers, scalers, and other tunnel-boring machines has significant benefits for worker safety in underground mining. Real-time augmentation of the presented remote views can further improve operator effectiveness through a more complete presentation of relevant sections of the remote location. In safety-critical applications, such augmentation cannot depend on preconditioned data, nor generate plausible-looking yet inaccurate sections of the view. In this paper, we present a capture and rendering pipeline for real-time view augmentation and novel view synthesis that depends only on the inbound data from lidar and camera sensors. We propose on-the-fly lidar filtering that reduces point oscillation at no performance cost, and a full rendering process based on lidar depth upscaling and in-view occluder removal from the presented scene. Performance assessments show that the proposed solution is feasible for real-time applications, where per-frame processing fits within the constraints set by the inbound sensor data and within framerate tolerances for enabling effective remote operation.
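The paper's filter design is not described in this abstract; the following is a minimal sketch of one plausible on-the-fly temporal filter for damping per-point lidar range oscillation, under the assumption of a fixed scan pattern so returns align index-to-index across frames. The class name, parameters, and threshold are all illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of on-the-fly temporal lidar filtering (not the
# paper's actual method). Small frame-to-frame range changes are treated
# as oscillation/noise and smoothed; large jumps are passed through
# unfiltered so genuine scene changes are not lagged.

class LidarTemporalFilter:
    def __init__(self, alpha=0.3, jump_threshold=0.5):
        self.alpha = alpha          # smoothing factor in (0, 1]
        self.jump = jump_threshold  # metres; larger changes count as motion
        self.state = None           # filtered ranges from the last frame

    def update(self, ranges):
        ranges = np.asarray(ranges, dtype=float)
        if self.state is None:
            self.state = ranges.copy()
            return self.state
        small = np.abs(ranges - self.state) < self.jump
        # Exponential moving average where the change is small, raw
        # measurement where it is large.
        self.state = np.where(
            small,
            self.alpha * ranges + (1.0 - self.alpha) * self.state,
            ranges,
        )
        return self.state

flt = LidarTemporalFilter()
flt.update([5.00, 5.00])
out = flt.update([5.02, 6.00])  # first return oscillates, second truly jumps
```

The per-frame cost is a few vectorized array operations over the scan, which is consistent with the abstract's claim that such filtering can run at no practical performance cost.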
3.
  • Dima, Elijs, 1990-, et al. (author)
  • Joint effects of depth-aiding augmentations and viewing positions on the quality of experience in augmented telepresence
  • 2020
  • In: Quality and User Experience. - Switzerland: Springer Nature. - ISSN 2366-0139, e-ISSN 2366-0147; vol. 5
  • Journal article (peer-reviewed), abstract:
    • Virtual and augmented reality is increasingly prevalent in industrial applications, such as remote control of industrial machinery, due to recent advances in head-mounted display technologies and low-latency communications via 5G. However, the influence of augmentations and camera placement-based viewing positions on operator performance in telepresence systems remains unknown. In this paper, we investigate the joint effects of depth-aiding augmentations and viewing positions on the quality of experience for operators in augmented telepresence systems. A study was conducted with 27 non-expert participants using a real-time augmented telepresence system to perform a remote-controlled navigation and positioning task, with varied depth-aiding augmentations and viewing positions. The resulting quality of experience was analyzed via Likert opinion scales, task performance measurements, and simulator sickness evaluation. Results suggest that reducing the reliance on stereoscopic depth perception via camera placement has a significant benefit to operator performance and quality of experience. Conversely, the depth-aiding augmentations can partly mitigate the negative effects of inferior viewing positions. However, the viewing-position-based monoscopic and stereoscopic depth cues tend to dominate over cues based on augmentations. There is also a discrepancy between the participants' subjective opinions on augmentation helpfulness and its observed effects on positioning task performance.
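The abstract describes a two-factor design (viewing position x depth-aiding augmentation) with joint effects. As a hypothetical sketch only, with fabricated illustrative numbers and not the paper's actual data or analysis, a first look at such joint effects is the cell means and interaction contrast of a 2x2 table:

```python
import numpy as np

# Fabricated illustrative values (NOT the study's data): mean task
# completion times in seconds for a 2x2 design.
# Rows: viewing position (good, poor); columns: augmentation (off, on).
times = np.array([[30.0, 29.0],
                  [45.0, 38.0]])

# Main effects: row/column mean differences.
position_effect = times[1].mean() - times[0].mean()
augment_effect = times[:, 1].mean() - times[:, 0].mean()

# Interaction contrast: does augmentation help more at the poor position?
interaction = (times[1, 0] - times[1, 1]) - (times[0, 0] - times[0, 1])
```

With these invented numbers the position effect dominates and the augmentation gain is concentrated at the poor viewing position, mirroring the qualitative pattern the abstract reports.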
