SwePub
Search the SwePub database

  Advanced search

Result list for search "WFRF:(Cui Zhaopeng)"


  • Results 1-5 of 5
1.
  • Cui, Zhaopeng, et al. (author)
  • Real-Time Dense Mapping for Self-Driving Vehicles using Fisheye Cameras
  • 2019
  • In: Proceedings - IEEE International Conference on Robotics and Automation. - 1050-4729. - 9781538660263 ; 2019-May, pp. 6087-6093
  • Conference paper (peer-reviewed), abstract:
    • We present a real-time dense geometric mapping algorithm for large-scale environments. Unlike existing methods which use pinhole cameras, our implementation is based on fisheye cameras whose large field of view benefits various computer vision applications for self-driving vehicles such as visual-inertial odometry, visual localization, and object detection. Our algorithm runs on in-vehicle PCs at approximately 15 Hz, enabling vision-only 3D scene perception for self-driving vehicles. For each synchronized set of images captured by multiple cameras, we first compute a depth map for a reference camera using plane-sweeping stereo. To maintain both accuracy and efficiency, while accounting for the fact that fisheye images have a lower angular resolution, we recover the depths using multiple image resolutions. We adopt the fast object detection framework, YOLOv3, to remove potentially dynamic objects. At the end of the pipeline, we fuse the fisheye depth images into the truncated signed distance function (TSDF) volume to obtain a 3D map. We evaluate our method on large-scale urban datasets, and results show that our method works well in complex dynamic environments.
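The depth-fusion step described in this abstract can be sketched in a few lines. The following is a minimal illustrative implementation of TSDF integration, assuming a pinhole camera model and a dense voxel volume anchored at the world origin (the paper itself uses fisheye cameras and a real-time pipeline); the function name and parameters are hypothetical:

```python
import numpy as np

def fuse_depth_into_tsdf(tsdf, weights, depth, K, T_cw, voxel_size, trunc):
    """Integrate one depth map into a TSDF volume (simplified sketch)."""
    nx, ny, nz = tsdf.shape
    # Voxel centers in world coordinates (C-order matches tsdf.reshape(-1))
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    pts_w = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size
    # Transform voxel centers into the camera frame
    pts_c = (T_cw[:3, :3] @ pts_w.T + T_cw[:3, 3:4]).T
    z = pts_c[:, 2]
    valid = z > 1e-6
    # Pinhole projection to pixel coordinates
    uv = (K @ pts_c.T).T
    u = np.round(uv[:, 0] / np.clip(z, 1e-6, None)).astype(int)
    v = np.round(uv[:, 1] / np.clip(z, 1e-6, None)).astype(int)
    h, w = depth.shape
    valid &= (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0
    # Signed distance along the viewing ray, truncated to [-1, 1]
    sdf = np.clip((d - z) / trunc, -1.0, 1.0)
    update = valid & (sdf > -1.0)  # skip voxels far behind the surface
    # Running weighted average per voxel
    flat_t, flat_w = tsdf.reshape(-1), weights.reshape(-1)
    flat_t[update] = (flat_t[update] * flat_w[update] + sdf[update]) / (flat_w[update] + 1.0)
    flat_w[update] += 1.0
```

Each incoming depth map updates the volume as a running weighted average, which is what makes the map converge as more frames are fused.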
2.
  • Geppert, Marcel, et al. (author)
  • Efficient 2D-3D Matching for Multi-Camera Visual Localization
  • 2019
  • In: Proceedings - IEEE International Conference on Robotics and Automation. - 1050-4729. ; 2019-May, pp. 5972-5978
  • Conference paper (peer-reviewed), abstract:
    • Visual localization, i.e., determining the position and orientation of a vehicle with respect to a map, is a key problem in autonomous driving. We present a multi-camera visual inertial localization algorithm for large scale environments. To efficiently and effectively match features against a pre-built global 3D map, we propose a prioritized feature matching scheme for multi-camera systems. In contrast to existing works, designed for monocular cameras, we (1) tailor the prioritization function to the multi-camera setup and (2) run feature matching and pose estimation in parallel. This significantly accelerates the matching and pose estimation stages and allows us to dynamically adapt the matching efforts based on the surrounding environment. In addition, we show how pose priors can be integrated into the localization system to increase efficiency and robustness. Finally, we extend our algorithm by fusing the absolute pose estimates with motion estimates from a multi-camera visual inertial odometry pipeline (VIO). This results in a system that provides reliable and drift-less pose estimation. Extensive experiments show that our localization runs fast and robust under varying conditions, and that our extended algorithm enables reliable real-time pose estimation.
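The prioritized matching scheme this abstract describes can be illustrated with a generic sketch: process features in order of expected matching cost and stop early once enough 2D-3D correspondences exist for pose estimation. This is not the paper's actual prioritization function (which is tailored to the multi-camera setup and runs in parallel with pose estimation); `match_cost` and `find_match` are hypothetical placeholders:

```python
import heapq

def prioritized_match(features, match_cost, find_match, enough=12):
    """Match features in ascending cost order; terminate early."""
    # Min-heap keyed on the prioritization function
    heap = [(match_cost(f), i) for i, f in enumerate(features)]
    heapq.heapify(heap)
    matches = []
    while heap and len(matches) < enough:
        _, i = heapq.heappop(heap)
        m = find_match(features[i])  # 2D-3D correspondence search against the map
        if m is not None:
            matches.append((i, m))
    return matches
```

The early termination is what lets the matching effort adapt dynamically to the environment: in feature-rich scenes only a fraction of the features are ever processed.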
3.
  • Heng, Lionel, et al. (author)
  • Project AutoVision: Localization and 3D Scene Perception for an Autonomous Vehicle with a Multi-Camera System
  • 2019
  • In: Proceedings - IEEE International Conference on Robotics and Automation. - 1050-4729. - 9781538660263 ; 2019-May, pp. 4695-4702
  • Conference paper (peer-reviewed), abstract:
    • Project AutoVision aims to develop localization and 3D scene perception capabilities for a self-driving vehicle. Such capabilities will enable autonomous navigation in urban and rural environments, in day and night, and with cameras as the only exteroceptive sensors. The sensor suite employs many cameras for both 360-degree coverage and accurate multi-view stereo; the use of low-cost cameras keeps the cost of this sensor suite to a minimum. In addition, the project seeks to extend the operating envelope to include GNSS-less conditions which are typical for environments with tall buildings, foliage, and tunnels. Emphasis is placed on leveraging multi-view geometry and deep learning to enable the vehicle to localize and perceive in 3D space. This paper presents an overview of the project, and describes the sensor suite and current progress in the areas of calibration, localization, and perception.
4.
  • Xu, Caihua, et al. (author)
  • WT1 promotes cell proliferation in non-small cell lung cancer cell lines through up-regulating cyclin D1 and p-pRb in vitro and in vivo
  • 2013
  • In: PLOS ONE. - San Francisco : PLoS, Public Library of Science. - 1932-6203. ; 8:8
  • Journal article (peer-reviewed), abstract:
    • The Wilms' tumor suppressor gene (WT1) has been identified as an oncogene in many malignant diseases such as leukaemia, breast cancer, mesothelioma and lung cancer. However, the role of WT1 in non-small-cell lung cancer (NSCLC) carcinogenesis remains unclear. In this study, we compared WT1 mRNA levels in NSCLC tissues with paired corresponding adjacent tissues and identified significantly higher expression in NSCLC specimens. Cell proliferation of three NSCLC cell lines positively correlated with WT1 expression; moreover, these associations were identified in both cell lines and a xenograft mouse model. Furthermore, we demonstrated that up-regulation of Cyclin D1 and the phosphorylated retinoblastoma protein (p-pRb) was mechanistically related to WT1 accelerating cells to S-phase. In conclusion, our findings demonstrated that WT1 is an oncogene and promotes NSCLC cell proliferation by up-regulating Cyclin D1 and p-pRb expression.
5.
  • Zhu, Zihan, et al. (author)
  • NICE-SLAM: Neural Implicit Scalable Encoding for SLAM
  • 2022
  • In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. - 9781665469463 - 9781665469470 ; pp. 12786-12796
  • Conference paper (peer-reviewed), abstract:
    • Neural implicit representations have recently shown encouraging results in various domains, including promising progress in simultaneous localization and mapping (SLAM). Nevertheless, existing methods produce over-smoothed scene reconstructions and have difficulty scaling up to large scenes. These limitations are mainly due to their simple fully-connected network architecture that does not incorporate local information in the observations. In this paper, we present NICE-SLAM, a dense SLAM system that incorporates multi-level local information by introducing a hierarchical scene representation. Optimizing this representation with pre-trained geometric priors enables detailed reconstruction on large indoor scenes. Compared to recent neural implicit SLAM systems, our approach is more scalable, efficient, and robust. Experiments on five challenging datasets demonstrate competitive results of NICE-SLAM in both mapping and tracking quality.
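The hierarchical scene representation mentioned in this abstract can be sketched as a coarse-to-fine stack of feature grids: a query point gathers a feature vector from every resolution level, and the concatenation is fed to decoder MLPs. The toy lookup below uses nearest-neighbor sampling for brevity where NICE-SLAM uses trilinear interpolation; the function name and grid layout are assumptions for illustration:

```python
import numpy as np

def query_hierarchy(grids, p):
    """Gather features for a point p in the unit cube from each grid level."""
    feats = []
    for g in grids:  # g has shape (res, res, res, channels)
        res = g.shape[0]
        # Nearest-neighbor cell index (trilinear interpolation in practice)
        idx = np.clip((np.asarray(p) * res).astype(int), 0, res - 1)
        feats.append(g[idx[0], idx[1], idx[2]])
    # Concatenated multi-level feature, input to the decoder MLPs
    return np.concatenate(feats)
```

Because each level has its own resolution, the coarse grids capture scene layout while the fine grids add local detail, which is the intuition behind the improved scalability the abstract reports.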