SwePub

LIDAR-Camera Fusion for Road Detection Using Fully Convolutional Neural Networks

Caltagirone, Luca, 1983 (author)
Chalmers tekniska högskola, Chalmers University of Technology
Bellone, Mauro, 1982 (author)
Chalmers tekniska högskola, Chalmers University of Technology
Svensson, Lennart, 1976 (author)
Chalmers tekniska högskola, Chalmers University of Technology
Wahde, Mattias, 1969 (author)
Chalmers tekniska högskola, Chalmers University of Technology
Elsevier BV, 2019
English.
In: Robotics and Autonomous Systems. - Elsevier BV. - ISSN 0921-8890. Vol. 111, pp. 125-131
Journal article (peer-reviewed)
Abstract
In this work, a deep learning approach has been developed to carry out road detection by fusing LIDAR point clouds and camera images. An unstructured and sparse point cloud is first projected onto the camera image plane and then upsampled to obtain a set of dense 2D images encoding spatial information. Several fully convolutional neural networks (FCNs) are then trained to carry out road detection, either by using data from a single sensor, or by using three fusion strategies: early, late, and the newly proposed cross fusion. Whereas in the former two fusion approaches, the integration of multimodal information is carried out at a predefined depth level, the cross fusion FCN is designed to directly learn from data where to integrate information; this is accomplished by using trainable cross connections between the LIDAR and the camera processing branches. To further highlight the benefits of using a multimodal system for road detection, a data set consisting of visually challenging scenes was extracted from driving sequences of the KITTI raw data set. It was then demonstrated that, as expected, a purely camera-based FCN severely underperforms on this data set. A multimodal system, on the other hand, is still able to provide high accuracy. Finally, the proposed cross fusion FCN was evaluated on the KITTI road benchmark where it achieved excellent performance, with a MaxF score of 96.03%, ranking it among the top-performing approaches.
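
The cross-fusion mechanism summarized in the abstract can be illustrated with a short sketch. The following is a minimal PyTorch-style example, assuming two parallel fully convolutional branches (camera and LIDAR) that exchange features through trainable scalar cross connections at every depth level; the class name CrossFusionFCN, the number of blocks, the channel widths, the three-channel dense LIDAR encoding, and the zero initialization of the cross weights are illustrative assumptions, not the authors' exact architecture.

# Minimal sketch of the cross-fusion idea: two parallel FCN branches
# (camera and LIDAR) exchange information at every depth level through
# trainable scalar cross connections, so the network learns from data
# where to integrate the two modalities. Layer counts, channel sizes,
# and names are illustrative assumptions.

import torch
import torch.nn as nn


class CrossFusionFCN(nn.Module):
    def __init__(self, cam_channels=3, lidar_channels=3, num_blocks=4, width=32):
        super().__init__()
        self.cam_blocks = nn.ModuleList()
        self.lidar_blocks = nn.ModuleList()
        in_cam, in_lid = cam_channels, lidar_channels
        for _ in range(num_blocks):
            self.cam_blocks.append(self._block(in_cam, width))
            self.lidar_blocks.append(self._block(in_lid, width))
            in_cam = in_lid = width
        # One trainable cross-connection weight per direction and depth level.
        self.cam_to_lidar = nn.Parameter(torch.zeros(num_blocks))
        self.lidar_to_cam = nn.Parameter(torch.zeros(num_blocks))
        # 1x1 convolution producing a per-pixel road / not-road score map.
        self.classifier = nn.Conv2d(2 * width, 1, kernel_size=1)

    @staticmethod
    def _block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam, lidar):
        for i, (cb, lb) in enumerate(zip(self.cam_blocks, self.lidar_blocks)):
            cam_out = cb(cam)
            lidar_out = lb(lidar)
            # Cross connections: each branch receives a learned fraction of
            # the other branch's features at this depth.
            cam = cam_out + self.lidar_to_cam[i] * lidar_out
            lidar = lidar_out + self.cam_to_lidar[i] * cam_out
        fused = torch.cat([cam, lidar], dim=1)
        return self.classifier(fused)  # logits, shape (N, 1, H, W)


if __name__ == "__main__":
    model = CrossFusionFCN()
    cam = torch.randn(1, 3, 128, 256)    # camera image
    lidar = torch.randn(1, 3, 128, 256)  # dense 2D LIDAR encoding (assumed x, y, z maps)
    print(model(cam, lidar).shape)       # torch.Size([1, 1, 128, 256])

During training, the cross-connection weights are updated by backpropagation together with the convolutional filters, so the network itself determines at which depths the two modalities are combined, which is the behaviour the abstract describes for the cross fusion strategy.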

Subject headings

NATURVETENSKAP  -- Data- och informationsvetenskap -- Annan data- och informationsvetenskap (hsv//swe)
NATURAL SCIENCES  -- Computer and Information Sciences -- Other Computer and Information Science (hsv//eng)
TEKNIK OCH TEKNOLOGIER  -- Annan teknik -- Mediateknik (hsv//swe)
ENGINEERING AND TECHNOLOGY  -- Other Engineering and Technologies -- Media Engineering (hsv//eng)
NATURVETENSKAP  -- Data- och informationsvetenskap -- Datorseende och robotik (hsv//swe)
NATURAL SCIENCES  -- Computer and Information Sciences -- Computer Vision and Robotics (hsv//eng)

Keyword

fully convolutional neural network
autonomous driving

Publication and Content Type

art (journal article)
ref (peer-reviewed)
