1.
- Aksoy, Eren, 1982-, et al.
(author)
-
SalsaNet : Fast Road and Vehicle Segmentation in LiDAR Point Clouds for Autonomous Driving
- 2020
-
In: IEEE Intelligent Vehicles Symposium. - Piscataway, N.J. : IEEE ; s. 926-932
-
Conference paper (peer-reviewed)
Abstract
- In this paper, we introduce a deep encoder-decoder network, named SalsaNet, for efficient semantic segmentation of 3D LiDAR point clouds. SalsaNet segments the road, i.e. drivable free-space, and vehicles in the scene by employing the Bird-Eye-View (BEV) image projection of the point cloud. To overcome the lack of annotated point cloud data, in particular for the road segments, we introduce an auto-labeling process which transfers automatically generated labels from the camera to the LiDAR. We also explore the role of image-like projection of LiDAR data in semantic segmentation by comparing BEV with spherical-front-view projection and show that SalsaNet is projection-agnostic. We perform quantitative and qualitative evaluations on the KITTI dataset, which demonstrate that the proposed SalsaNet outperforms other state-of-the-art semantic segmentation networks in terms of accuracy and computation time. Our code and data are publicly available at https://gitlab.com/aksoyeren/salsanet.git.
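The BEV projection described in the abstract can be illustrated with a minimal sketch: discretizing the ground plane into a grid and rasterizing each LiDAR point into a cell. The ranges, resolution, and max-height aggregation below are illustrative assumptions, not the settings or feature channels used by SalsaNet itself.

```python
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), res=0.5):
    """Project an (N, 3) LiDAR point cloud into a bird's-eye-view height map.

    Each grid cell stores the maximum point height falling into it; empty
    cells are zero. Ranges and resolution are hypothetical defaults.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Keep only points inside the chosen field of view.
    mask = (x >= x_range[0]) & (x < x_range[1]) & \
           (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[mask], y[mask], z[mask]
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    bev = np.full((h, w), -np.inf)
    # Map metric coordinates to integer grid indices.
    rows = ((x - x_range[0]) / res).astype(int)
    cols = ((y - y_range[0]) / res).astype(int)
    # Unbuffered max-aggregation: cells hit by several points keep the highest z.
    np.maximum.at(bev, (rows, cols), z)
    bev[np.isinf(bev)] = 0.0
    return bev

cloud = np.array([[1.0, 0.0, 0.5],   # two points in the same cell
                  [1.2, 0.1, 1.5],
                  [60.0, 0.0, 2.0]]) # outside the x-range, dropped
grid = lidar_to_bev(cloud)           # shape (100, 100)
```

A real pipeline would typically stack several such channels (mean height, intensity, density) before feeding the image to the encoder-decoder.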

2.
- Cooney, Martin, 1980-, et al.
(author)
-
Exercising with an “Iron Man” : Design for a Robot Exercise Coach for Persons with Dementia
- 2020
-
In: 29th IEEE International Conference on Robot and Human Interactive Communication. - Piscataway : Institute of Electrical and Electronics Engineers (IEEE). - ISBN 9781728160757, 9781728160764 ; s. 899-905
-
Conference paper (peer-reviewed)
Abstract
- Socially assistive robots are increasingly being designed to interact with humans in various therapeutic scenarios. We believe that one useful scenario is providing exercise coaching for Persons with Dementia (PWD), which involves unique challenges related to memory and communication. We present a design for a robot that can seek to help a PWD to conduct exercises by recognizing their behaviors and providing appropriate feedback, in an online, multimodal, and engaging way. Additionally, following a mid-fidelity prototyping approach, we report on some observations from an exploratory user study using a Baxter robot; although limited by the sample size and our simplified approach, the results suggested the usefulness of the general scenario, and that the degree to which a robot provides feedback (occasional or continuous) could moderate impressions of attentiveness or fun. Some possibilities for future improvement are outlined, touching on richer recognition and behavior generation strategies based on deep learning and haptic feedback, toward informing future designs. © 2020 IEEE.