SwePub

Results for search: WFRF:(Gärtner Erik)

  • Results 1-8 of 8
1.
  • Domova, Veronika, 1987-, et al. (author)
  • Improving Usability of Search and Rescue Decision Support Systems : WARA-PS Case Study
  • 2020
  • In: Proceedings 25th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2020. - Vienna, Austria : Institute of Electrical and Electronics Engineers (IEEE). - 9781728189574 - 9781728189567 ; 2020-September, pp. 1251-1254
  • Conference paper (peer-reviewed), abstract:
    • Novel autonomous search and rescue systems, although powerful, still require the involvement of a human decision-maker. In this project, we focus on the human aspect of one such novel autonomous SAR system. Relying on the knowledge gained in a field study, as well as through the literature, we introduced several extensions to the system that allowed us to achieve a more user-centered interface. In the evaluation session with a rescue service specialist, we received positive feedback and defined potential directions for future work.
2.
  • Gärtner, Erik (author)
  • Active and Physics-Based Human Pose Reconstruction
  • 2023
  • Doctoral thesis (other academic/artistic), abstract:
    • Perceiving humans is an important and complex problem within computer vision. Its significance is derived from its numerous applications, such as human-robot interaction, virtual reality, markerless motion capture, and human tracking for autonomous driving. The difficulty lies in the variability in human appearance, physique, and plausible body poses. In real-world scenes, this is further exacerbated by difficult lighting conditions, partial occlusions, and the depth ambiguity stemming from the loss of information during the 3d to 2d projection. Despite these challenges, significant progress has been made in recent years, primarily due to the expressive power of deep neural networks trained on large datasets. However, creating large-scale datasets with 3d annotations is expensive, and capturing the vast diversity of the real world is demanding. Traditionally, 3d ground truth is captured using motion capture laboratories that require large investments. Furthermore, many laboratories cannot easily accommodate athletic and dynamic motions. This thesis studies three approaches to improving visual perception, with emphasis on human pose estimation, that can complement improvements to the underlying predictor or training data. The first two papers present active human pose estimation, where a reinforcement learning agent is tasked with selecting informative viewpoints to reconstruct subjects efficiently. The papers discard the common assumption that the input is given and instead allow the agent to move to observe subjects from desirable viewpoints, e.g., those which avoid occlusions and for which the underlying pose estimator has a low prediction error. The third paper introduces the task of embodied visual active learning, which goes further and assumes that the perceptual model is not pre-trained. Instead, the agent is tasked with exploring its environment and requesting annotations to refine its visual model. Learning to explore novel scenarios and efficiently request annotation for new data is a step towards life-long learning, where models can evolve beyond what they learned during the initial training phase. We study the problem for segmentation, though the idea is applicable to other perception tasks. Lastly, the final two papers propose improving human pose estimation by integrating physical constraints. These regularize the reconstructed motions to be physically plausible and serve as a complement to current kinematic approaches. Whether a motion has been observed in the training data or not, the predictions should obey the laws of physics. Through integration with a physical simulator, we demonstrate that we can reduce reconstruction artifacts and enforce, e.g., contact constraints.
3.
  • Gärtner, Erik, et al. (author)
  • Deep Reinforcement Learning for Active Human Pose Estimation
  • 2020
  • In: AAAI 2020 - 34th AAAI Conference on Artificial Intelligence. - : Association for the Advancement of Artificial Intelligence (AAAI). - 2159-5399 .- 2374-3468. ; 34:07, pp. 10835-10844
  • Conference paper (peer-reviewed), abstract:
    • Most 3d human pose estimation methods assume that input – be it images of a scene collected from one or several viewpoints, or from a video – is given. Consequently, they focus on estimates leveraging prior knowledge and measurement by fusing information spatially and/or temporally, whenever available. In this paper we address the problem of an active observer with freedom to move and explore the scene spatially – in ‘time-freeze’ mode – and/or temporally, by selecting informative viewpoints that improve its estimation accuracy. Towards this end, we introduce Pose-DRL, a fully trainable deep reinforcement learning-based active pose estimation architecture which learns to select appropriate views, in space and time, to feed an underlying monocular pose estimator. We evaluate our model using single- and multi-target estimators with strong results in both settings. Our system further learns automatic stopping conditions in time and transition functions to the next temporal processing step in videos. In extensive experiments with the Panoptic multi-view setup, and for complex scenes containing multiple people, we show that our model learns to select viewpoints that yield significantly more accurate pose estimates compared to strong multi-view baselines.
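The selection loop this abstract describes can be illustrated with a toy sketch. This is not the Pose-DRL implementation (which trains a deep RL policy end-to-end and learns its stopping condition); a greedy oracle with access to a synthetic ground truth stands in for the learned policy, and all names and data are invented for illustration.

```python
# Toy active viewpoint selection (illustrative only, not Pose-DRL):
# greedily add the camera view that most reduces the fused pose error,
# stopping once the marginal improvement is negligible.
import numpy as np

rng = np.random.default_rng(0)
NUM_CAMERAS, NUM_JOINTS = 8, 15
true_pose = rng.normal(size=(NUM_JOINTS, 3))           # synthetic ground-truth 3d joints
view_noise = rng.uniform(0.05, 0.5, size=NUM_CAMERAS)  # some viewpoints are more informative

def estimate_from_view(cam):
    """Stand-in for the underlying monocular pose estimator applied to view `cam`."""
    return true_pose + rng.normal(scale=view_noise[cam], size=true_pose.shape)

def fuse(estimates):
    """Fuse per-view estimates by simple averaging."""
    return np.mean(estimates, axis=0)

def pose_error(pred):
    """Mean per-joint position error against the synthetic ground truth."""
    return float(np.linalg.norm(pred - true_pose, axis=1).mean())

chosen, estimates, error = [], [], np.inf
for _ in range(NUM_CAMERAS):
    best_cam, best_err = None, error
    for cam in set(range(NUM_CAMERAS)) - set(chosen):
        err = pose_error(fuse(estimates + [estimate_from_view(cam)]))
        if err < best_err:
            best_cam, best_err = cam, err
    if best_cam is None or error - best_err < 1e-3:  # crude stand-in for the learned stop
        break
    chosen.append(best_cam)
    estimates.append(estimate_from_view(best_cam))
    error = best_err
print("selected views:", chosen, "final error:", round(error, 4))
```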
4.
  • Gärtner, Erik, et al. (author)
  • Differentiable Dynamics for Articulated 3d Human Motion Reconstruction
  • 2022
  • In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. - 9781665469470 - 9781665469463
  • Conference paper (peer-reviewed), abstract:
    • We introduce DiffPhy, a differentiable physics-based model for articulated 3d human motion reconstruction from video. Applications of physics-based reasoning in human motion analysis have so far been limited, both by the complexity of constructing adequate physical models of articulated human motion, and by the formidable challenges of performing stable and efficient inference with physics in the loop. We jointly address such modeling and inference challenges by proposing an approach that combines a physically plausible body representation with anatomical joint limits, a differentiable physics simulator, and optimization techniques that ensure good performance and robustness to suboptimal local optima. In contrast to several recent methods [39], [42], [55], our approach readily supports full-body contact including interactions with objects in the scene. Most importantly, our model connects end-to-end with images, thus supporting direct gradient-based physics optimization by means of image-based loss functions. We validate the model by demonstrating that it can accurately reconstruct physically plausible 3d human motion from monocular video, both on public benchmarks with available 3d ground-truth, and on videos from the internet.
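To make the "physics in the loop with image-based losses" idea concrete, here is a minimal sketch under strong simplifying assumptions: a point mass under gravity replaces the articulated body model, explicit Euler integration replaces the differentiable simulator, and central finite differences replace analytic gradients. Nothing here reflects DiffPhy's actual implementation.

```python
# Fit the initial velocity of a point mass so its simulated trajectory
# matches 2d projections of a target trajectory (toy "image-based" loss).
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])
DT, STEPS = 0.05, 30

def simulate(v0):
    """Roll out a point mass under gravity from the origin (explicit Euler)."""
    pos, vel, traj = np.zeros(3), v0.astype(float), []
    for _ in range(STEPS):
        vel = vel + GRAVITY * DT
        pos = pos + vel * DT
        traj.append(pos.copy())
    return np.array(traj)

def project(points_3d):
    """Toy pinhole projection onto the xz image plane (camera 5 units back in y)."""
    return points_3d[:, [0, 2]] / (points_3d[:, 1:2] + 5.0)

target_2d = project(simulate(np.array([1.0, 2.0, 8.0])))  # "observed" 2d keypoints

def loss(v0):
    """Squared image-space discrepancy between rollout and observations."""
    return float(np.sum((project(simulate(v0)) - target_2d) ** 2))

v0 = np.array([0.0, 1.0, 5.0])  # initial guess
for _ in range(200):            # gradient descent through the simulator
    grad = np.array([(loss(v0 + e) - loss(v0 - e)) / 2e-4 for e in np.eye(3) * 1e-4])
    v0 -= 0.01 * grad
print("recovered v0:", np.round(v0, 3), "final loss:", round(loss(v0), 6))
```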
5.
  • Gärtner, Erik, et al. (author)
  • Trajectory Optimization for Physics-Based Reconstruction of 3d Human Pose from Monocular Video
  • 2022
  • In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. - 9781665469470 - 9781665469463
  • Conference paper (peer-reviewed), abstract:
    • We focus on the task of estimating a physically plausible articulated human motion from monocular video. Existing approaches that do not consider physics often produce temporally inconsistent output with motion artifacts, while state-of-the-art physics-based approaches have either been shown to work only in controlled laboratory conditions or consider simplified body-ground contact limited to feet. This paper explores how these shortcomings can be addressed by directly incorporating a fully-featured physics engine into the pose estimation process. Given an uncontrolled, real-world scene as input, our approach estimates the ground-plane location and the dimensions of the physical body model. It then recovers the physical motion by performing trajectory optimization. The advantage of our formulation is that it readily generalizes to a variety of scenes that might have diverse ground properties and supports any form of self-contact and contact between the articulated body and scene geometry. We show that our approach achieves competitive results with respect to existing physics-based methods on the Human3.6M benchmark [13], while being directly applicable without re-training to more complex dynamic motions from the AIST benchmark [36] and to uncontrolled internet videos.
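One sub-step this abstract names, estimating the ground-plane location, can be sketched as an ordinary least-squares fit. The plane parameterization z = ax + by + c and the synthetic "ankle" points below are illustrative assumptions, not the paper's exact procedure.

```python
# Fit a ground plane z = a*x + b*y + c to putative contact points
# (e.g., ankle joints at their lowest positions) by least squares.
import numpy as np

rng = np.random.default_rng(1)
# synthetic contact points scattered near the plane z = 0.02x - 0.01y + 0.3
xy = rng.uniform(-2, 2, size=(100, 2))
z = 0.02 * xy[:, 0] - 0.01 * xy[:, 1] + 0.3 + rng.normal(scale=0.005, size=100)

A = np.column_stack([xy, np.ones(len(xy))])        # design matrix [x, y, 1]
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)  # least-squares plane coefficients
print(f"estimated ground plane: z = {a:.3f}x + {b:.3f}y + {c:.3f}")
```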
6.
  • Lu, Yingchang, et al. (author)
  • New loci for body fat percentage reveal link between adiposity and cardiometabolic disease risk
  • 2016
  • In: Nature Communications. - : Springer Science and Business Media LLC. - 2041-1723. ; 7
  • Journal article (peer-reviewed), abstract:
    • To increase our understanding of the genetic basis of adiposity and its links to cardiometabolic disease risk, we conducted a genome-wide association meta-analysis of body fat percentage (BF%) in up to 100,716 individuals. Twelve loci reached genome-wide significance (P < 5 × 10⁻⁸), of which eight were previously associated with increased overall adiposity (BMI, BF%) and four (in or near COBLL1/GRB14, IGF2BP1, PLA2G6, CRTC1) were novel associations with BF%. Seven loci showed a larger effect on BF% than on BMI, suggestive of a primary association with adiposity, while five loci showed larger effects on BMI than on BF%, suggesting association with both fat and lean mass. In particular, the loci more strongly associated with BF% showed distinct cross-phenotype association signatures with a range of cardiometabolic traits, revealing new insights into the link between adiposity and disease risk.
7.
  • Nilsson, David, et al. (author)
  • Embodied Visual Active Learning for Semantic Segmentation
  • 2021
  • In: Proceedings of the AAAI Conference on Artificial Intelligence. - : Association for the Advancement of Artificial Intelligence (AAAI). - 2374-3468 .- 2159-5399. - 9781713835974 ; pp. 2373-2383
  • Conference paper (peer-reviewed), abstract:
    • We study the task of embodied visual active learning, where an agent is set to explore a 3d environment with the goal to acquire visual scene understanding by actively selecting views for which to request annotation. While accurate on some benchmarks, today's deep visual recognition pipelines tend to not generalize well in certain real-world scenarios, or for unusual viewpoints. Robotic perception, in turn, requires the capability to refine the recognition capabilities for the conditions where the mobile system operates, including cluttered indoor environments or poor illumination. This motivates the proposed task, where an agent is placed in a novel environment with the objective of improving its visual recognition capability. To study embodied visual active learning, we develop a battery of agents - both learnt and pre-specified - and with different levels of knowledge of the environment. The agents are equipped with a semantic segmentation network and seek to acquire informative views, move and explore in order to propagate annotations in the neighbourhood of those views, then refine the underlying segmentation network by online retraining. The trainable method uses deep reinforcement learning with a reward function that balances two competing objectives: task performance, represented as visual recognition accuracy, which requires exploring the environment, and the necessary amount of annotated data requested during active exploration. We extensively evaluate the proposed models using the photorealistic Matterport3D simulator and show that a fully learnt method outperforms comparable pre-specified counterparts, even when requesting fewer annotations.
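The trade-off this abstract describes, recognition accuracy against annotation budget, can be written as a simple shaped reward. The function name, the per-pixel cost model, and the weight `lam` below are assumptions for illustration, not the paper's exact formulation.

```python
# Shaped reward balancing segmentation improvement against annotation cost.
def active_learning_reward(acc_before: float, acc_after: float,
                           pixels_annotated: int, lam: float = 1e-6) -> float:
    """Reward accuracy gains; penalize the volume of requested annotation."""
    return (acc_after - acc_before) - lam * pixels_annotated

# e.g., +2 points of accuracy bought with 100k annotated pixels nets a negative reward:
print(active_learning_reward(0.55, 0.57, 100_000))  # 0.02 - 0.1 = -0.08
```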
8.
  • Pirinen, Aleksis, et al. (author)
  • Domes to drones : Self-supervised active triangulation for 3d human pose reconstruction
  • 2019
  • In: Advances in Neural Information Processing Systems 32 (NeurIPS 2019). - 1049-5258. - 9781713807933 ; 32
  • Conference paper (peer-reviewed), abstract:
    • Existing state-of-the-art estimation systems can detect 2d poses of multiple people in images quite reliably. In contrast, 3d pose estimation from a single image is ill-posed due to occlusion and depth ambiguities. Assuming access to multiple cameras, or given an active system able to position itself to observe the scene from multiple viewpoints, reconstructing 3d pose from 2d measurements becomes well-posed within the framework of standard multi-view geometry. Less clear is what is an informative set of viewpoints for accurate 3d reconstruction, particularly in complex scenes, where people are occluded by others or by scene objects. In order to address the view selection problem in a principled way, we here introduce ACTOR, an active triangulation agent for 3d human pose reconstruction. Our fully trainable agent consists of a 2d pose estimation network (any of which would work) and a deep reinforcement learning-based policy for camera viewpoint selection. The policy predicts observation viewpoints, the number of which varies adaptively depending on scene content, and the associated images are fed to an underlying pose estimator. Importantly, training the policy requires no annotations - given a 2d pose estimator, ACTOR is trained in a self-supervised manner. In extensive evaluations on complex multi-people scenes filmed in a Panoptic dome, under multiple viewpoints, we compare our active triangulation agent to strong multi-view baselines, and show that ACTOR produces significantly more accurate 3d pose reconstructions. We also provide a proof-of-concept experiment indicating the potential of connecting our view selection policy to a physical drone observer.
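The multi-view-geometry step this abstract builds on, recovering a joint's 3d position from its 2d detections in calibrated views, is standard linear (DLT) triangulation. The camera matrices below are synthetic examples; ACTOR's contribution is choosing which views to triangulate from, not the triangulation itself.

```python
# Linear (DLT) triangulation of one 3d point from >= 2 calibrated views.
import numpy as np

def triangulate(proj_mats, points_2d):
    """Solve for the homogeneous 3d point minimizing the DLT residual."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])  # u * (p3 . X) - p1 . X = 0
        rows.append(v * P[2] - P[1])  # v * (p3 . X) - p2 . X = 0
    _, _, vt = np.linalg.svd(np.array(rows))
    X = vt[-1]                        # null-space direction
    return X[:3] / X[3]               # dehomogenize

# two toy cameras: identity pose, and a 1-unit baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0, 1.0])
obs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(np.round(triangulate([P1, P2], obs), 6))  # ~ [0.3, -0.2, 4.0]
```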
Type of publication
conference paper (6)
journal article (1)
doctoral thesis (1)
Type of content
peer-reviewed (7)
other academic/artistic (1)
Author/editor
Nilsson, David (1)
Vandenput, Liesbeth, ... (1)
Salomaa, Veikko (1)
Jula, Antti (1)
Perola, Markus (1)
Lind, Lars (1)
Raitakari, Olli T (1)
Cederholm, Tommy (1)
Campbell, Harry (1)
Rudan, Igor (1)
Ohlsson, Claes, 1965 (1)
Deloukas, Panos (1)
Bishop, D Timothy (1)
Hernandez, Dena (1)
Shungin, Dmitry (1)
North, Kari E. (1)
Wareham, Nicholas J. (1)
Stancáková, Alena (1)
Kuusisto, Johanna (1)
Laakso, Markku (1)
Ahluwalia, Tarunveer ... (1)
Forsén, Tom (1)
McCarthy, Mark I (1)
Linneberg, Allan (1)
Grarup, Niels (1)
Pedersen, Oluf (1)
Hansen, Torben (1)
Demirkan, Ayse (1)
van Duijn, Cornelia ... (1)
Qi, Qibin (1)
Jørgensen, Torben (1)
Langenberg, Claudia (1)
Boehnke, Michael (1)
Mohlke, Karen L (1)
Scott, Robert A (1)
Ingelsson, Erik (1)
Li, Xin (1)
Hunter, David J (1)
Havulinna, Aki S. (1)
Ripatti, Samuli (1)
Kähönen, Mika (1)
Lehtimäki, Terho (1)
Verweij, Niek (1)
Shuldiner, Alan R. (1)
Koskinen, Seppo (1)
Mangino, Massimo (1)
Oostra, Ben A. (1)
Gieger, Christian (1)
Peters, Annette (1)
Strauch, Konstantin (1)
Higher education institution
Lunds universitet (8)
Göteborgs universitet (1)
Umeå universitet (1)
Kungliga Tekniska Högskolan (1)
Uppsala universitet (1)
Linköpings universitet (1)
Karolinska Institutet (1)
Language
English (8)
Research subject (UKÄ/SCB)
Natural sciences (7)
Engineering and technology (2)
Medical and health sciences (1)
