SwePub

Result list for search "WFRF:(Conradt Jörg) srt2:(2020)"

Search: WFRF:(Conradt Jörg) > (2020)

  • Results 1-9 of 9
1.
  • Chen, Guang, et al. (author)
  • A Novel Visible Light Positioning System With Event-Based Neuromorphic Vision Sensor
  • 2020
  • In: IEEE Sensors Journal. - Institute of Electrical and Electronics Engineers (IEEE). - ISSN 1530-437X, E-ISSN 1558-1748. 20(17), pp. 10211-10219
  • Journal article (peer-reviewed). Abstract:
    • With the advanced development of image processing technology, visible light positioning (VLP) systems based on image sensors have attracted more and more attention. However, the traditional CMOS camera commonly used as a light receiver has limited dynamic range and high latency, which makes it susceptible to various lighting and environmental factors. Moreover, high computational cost from image processing is unavoidable for most visible light positioning systems. In our work, a novel VLP system using an event-based neuromorphic vision sensor (event camera) as the light receiver is proposed. Due to the low latency and microsecond-level temporal resolution of the event camera, our VLP system is able to identify multiple high-frequency flickering LEDs in asynchronous events simultaneously, eliminating the need for data association and traditional image processing methods. A multi-LED fusion method is applied, and a high positioning accuracy of 3 cm is achieved when the height between the LEDs and the event camera is within 1 m.
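The abstract above does not include code, but its core idea (each flickering LED produces asynchronous events whose timing encodes the modulation frequency) can be illustrated with a small sketch. Everything below — the function name, the 2 kHz square-wave LED, one event per brightness flip — is a hypothetical simplification, not the authors' implementation:

```python
import numpy as np

def estimate_flicker_frequency(timestamps_us):
    """Estimate an LED's flicker frequency (Hz) from the timestamps
    (in microseconds) of the events it triggers. Assumes a square-wave
    LED where every brightness flip yields one event, i.e. two events
    per flicker period."""
    intervals = np.diff(np.sort(timestamps_us))   # microseconds between events
    half_period = np.median(intervals)            # median is robust to jitter
    return 1e6 / (2.0 * half_period)

# Synthetic events from a hypothetical 2 kHz LED: one flip every 250 us,
# with Gaussian timing jitter.
rng = np.random.default_rng(0)
t = np.cumsum(np.full(1000, 250.0) + rng.normal(0.0, 5.0, 1000))
print(estimate_flicker_frequency(t))  # close to 2000 Hz
```

Because each LED flickers at a distinct frequency, estimates like this can separate multiple LEDs without any frame-based image processing, which is what makes the event camera attractive here.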
2.
  • Chen, Guang, et al. (author)
  • EDDD : Event-Based Drowsiness Driving Detection Through Facial Motion Analysis With Neuromorphic Vision Sensor
  • 2020
  • In: IEEE Sensors Journal. - IEEE. - ISSN 1530-437X, E-ISSN 1558-1748. 20(11), pp. 6170-6181
  • Journal article (peer-reviewed). Abstract:
    • Drowsy driving is a principal factor in many fatal traffic accidents. This paper presents the first event-based drowsiness driving detection (EDDD) system, using the recently developed neuromorphic vision sensor. Compared with traditional frame-based cameras, neuromorphic vision sensors, such as Dynamic Vision Sensors (DVS), have a high dynamic range and do not acquire full images at a fixed frame rate, but rather have independent pixels that output intensity changes (called events) asynchronously at the time they occur. Since events are generated by moving edges in the scene, the DVS is considered an efficient and effective detector of drowsiness-related driving motions. Based on this unique output, this work first proposes a highly efficient method to recognize and localize the driver's eye and mouth motions from event streams. We further design and extract event-based drowsiness-related features directly from the event streams caused by eye and mouth motions; the EDDD model is then established based on these features. Additionally, we provide the EDDD dataset, the first public dataset dedicated to event-based drowsiness driving detection. The EDDD dataset has 260 daytime and evening recordings with several challenging scenes, such as subjects wearing glasses or sunglasses. Experiments conducted on this dataset demonstrate the high efficiency and accuracy of our method under different illumination conditions. As the first investigation of the use of DVS in drowsiness driving detection applications, we hope that this work will inspire more event-based drowsiness driving detection research.
3.
  • Chen, Guang, et al. (author)
  • Event-Based Neuromorphic Vision for Autonomous Driving : A Paradigm Shift for Bio-Inspired Visual Sensing and Perception
  • 2020
  • In: IEEE Signal Processing Magazine (Print). - Institute of Electrical and Electronics Engineers (IEEE). - ISSN 1053-5888, E-ISSN 1558-0792. 37(4), pp. 34-49
  • Journal article (peer-reviewed). Abstract:
    • As a bio-inspired and emerging sensor, the event-based neuromorphic vision sensor has a different working principle from standard frame-based cameras, which leads to promising properties of low energy consumption, low latency, high dynamic range (HDR), and high temporal resolution. It poses a paradigm shift in sensing and perceiving the environment by capturing local pixel-level light-intensity changes and producing asynchronous event streams. Advanced technologies for the visual sensing systems of autonomous vehicles have been developed, spanning standard computer vision to event-based neuromorphic vision. In this tutorial-like article, a comprehensive review of this emerging technology is given. First, the course of development of the neuromorphic vision sensor, derived from an understanding of the biological retina, is introduced. The signal-processing techniques for event noise processing and event data representation are then discussed. Next, the signal-processing algorithms and applications of event-based neuromorphic vision in autonomous driving and various assistance systems are reviewed. Finally, challenges and future research directions are pointed out. It is expected that this article will serve as a starting point for new researchers and engineers in the autonomous driving field and provide a bird's-eye view to both the neuromorphic vision and autonomous driving research communities.
4.
  • Jao, C. -S, et al. (author)
  • Zero Velocity Detector for Foot-mounted Inertial Navigation System Assisted by a Dynamic Vision Sensor
  • 2020
  • In: 2020 DGON Inertial Sensors and Systems, ISS 2020 - Proceedings. - Institute of Electrical and Electronics Engineers Inc.
  • Conference paper (peer-reviewed). Abstract:
    • In this paper, we propose a novel zero-velocity detector, the Dynamic-Vision-Sensor (DVS)-Aided Stance Phase Optimal dEtection (SHOE) detector, for Zero-velocity-UPdaTe (ZUPT)-aided Inertial Navigation Systems (INS) augmented by a foot-mounted event-based camera (DVS128). We observed that the firing rate of the DVS consistently increased during the swing phase and decreased during the stance phase in indoor walking experiments. We experimentally determined that the optimal placement configuration for zero-velocity detection is to mount the DVS next to an Inertial Measurement Unit (IMU), facing the sensor outward. The DVS-SHOE detector was derived in a Generalized Likelihood Ratio Test (GLRT) framework, combining the statistics of the conventional SHOE detector with the DVS firing rate. This paper used two methods to evaluate the proposed DVS-SHOE detector. First, we compared the detection performance of the SHOE detector and the DVS-SHOE detector: the experimental results showed that the DVS-SHOE detector achieved a lower false-alarm rate than the SHOE detector. Second, we compared the navigation performance of the ZUPT-aided INS using the SHOE detector and the DVS-SHOE detector: the experimental results showed that the Circular Error Probable (CEP) when using DVS-SHOE was reduced by around 25%, from 1.2 m to 0.9 m, compared to the case of the SHOE detector.
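A simplified sketch of the fusion idea described in this abstract: combine an IMU-based zero-velocity test statistic (in the style of the SHOE detector) with the DVS event rate, which the authors observed to be low during stance and high during swing. The noise parameters, thresholds, and the simple AND rule below are illustrative assumptions, not the paper's GLRT derivation:

```python
import numpy as np

def shoe_statistic(accel, gyro, g=9.81, sigma_a=0.1, sigma_w=0.01):
    """SHOE-style zero-velocity test statistic over a window of IMU
    samples (both arrays N x 3). Low values suggest the foot is at rest.
    sigma_a / sigma_w are assumed noise levels, not calibrated values."""
    a_bar = accel.mean(axis=0)
    a_res = accel - g * a_bar / np.linalg.norm(a_bar)  # gravity-aligned residual
    return (np.sum(a_res**2) / sigma_a**2
            + np.sum(gyro**2) / sigma_w**2) / len(accel)

def dvs_shoe_detect(accel, gyro, event_rate, gamma=1e4, rate_max=2e3):
    """Toy fusion rule: declare zero velocity only when the IMU statistic
    is below gamma AND the DVS firing rate (events/s) is low, since the
    sensor fires mostly while the foot swings."""
    return bool(shoe_statistic(accel, gyro) < gamma and event_rate < rate_max)

rng = np.random.default_rng(0)
stance_accel = np.tile([0.0, 0.0, 9.81], (50, 1)) + rng.normal(0.0, 0.01, (50, 3))
stance_gyro = rng.normal(0.0, 0.001, (50, 3))
swing_gyro = rng.normal(0.0, 5.0, (50, 3))               # foot rotating fast
print(dvs_shoe_detect(stance_accel, stance_gyro, 100.0))  # True: stance
print(dvs_shoe_detect(stance_accel, swing_gyro, 5e4))     # False: swing
```

The paper's actual detector folds the firing-rate statistic into the likelihood ratio itself rather than using a hard AND, which is what yields the reported false-alarm improvement.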
5.
  • Mirus, Florian, et al. (author)
  • Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information
  • 2020
  • In: 2020 International Joint Conference on Neural Networks (IJCNN), Institute of Electrical and Electronics Engineers Inc., 2020. - IEEE.
  • Conference paper (peer-reviewed). Abstract:
    • Vector Symbolic Architectures belong to a family of related cognitive modeling approaches that encode symbols and structures in high-dimensional vectors. Similar to human subjects, whose capacity to process and store information or concepts in short-term memory is subject to numerical restrictions, the amount of information that can be encoded in such vector representations is limited, which makes them one way of modeling the numerical restrictions on cognition. In this paper, we analyze these limits on the information capacity of distributed representations. We focus our analysis on simple superposition and on more complex, structured representations involving convolutive powers to encode spatial information. In two experiments, we find upper bounds for the number of concepts that can effectively be stored in a single vector, depending only on the dimensionality of the underlying vector space.
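The superposition-capacity question in this abstract can be illustrated with a toy experiment (a sketch under assumed parameters, not the authors' setup): store n random vectors in a single sum and check whether stored items can still be distinguished from distractors by inner-product similarity.

```python
import numpy as np

def cleanup_accuracy(n_stored, dim, codebook_size=100, seed=0):
    """Superpose n_stored random vectors from a codebook and count how
    often each stored item is still more similar (inner product) to the
    superposition than every non-stored distractor item."""
    rng = np.random.default_rng(seed)
    codebook = rng.normal(0.0, 1.0 / np.sqrt(dim), (codebook_size, dim))
    trace = codebook[:n_stored].sum(axis=0)      # simple superposition
    sims = codebook @ trace
    strongest_distractor = sims[n_stored:].max()
    return float(np.mean(sims[:n_stored] > strongest_distractor))

# Retrieval is reliable only while the number of stored items is small
# relative to the vector dimensionality (parameters chosen for illustration).
print(cleanup_accuracy(5, dim=512))   # near-perfect recovery
print(cleanup_accuracy(60, dim=64))   # capacity exceeded, recovery degrades
```

The crosstalk noise from superposition grows with the number of stored items while the signal stays fixed, which is why the paper's capacity bounds depend only on the dimensionality of the vector space.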
6.
  • Mirus, F., et al. (author)
  • Detection of abnormal driving situations using distributed representations and unsupervised learning
  • 2020
  • In: ESANN 2020 - Proceedings, 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. - ESANN. pp. 363-368
  • Conference paper (peer-reviewed). Abstract:
    • In this paper, we present an anomaly detection system employing an unsupervised learning model trained on the information encapsulated within distributed vector representations of automotive scenes. Our representation allows us to encode automotive scenes with a varying number of traffic participants in a vector of fixed length. We train a neural network autoencoder in an unsupervised fashion to detect anomalies based on this representation. We demonstrate the usefulness of our approach through a quantitative analysis on two real-world datasets.
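A minimal sketch of the approach this abstract describes, with toy data standing in for the distributed scene representations. A linear autoencoder is used here for compactness (its optimum coincides with PCA, so the SVD solution is taken directly), whereas the paper trains a neural-network autoencoder:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for fixed-length scene vectors: "normal" scenes lie near a
# 4-dimensional subspace of a 32-dimensional representation space.
basis = rng.normal(size=(4, 32))
normal_scenes = rng.normal(size=(500, 4)) @ basis \
    + 0.05 * rng.normal(size=(500, 32))

# The optimum of a linear autoencoder trained on reconstruction error is
# the principal subspace, so take the PCA solution directly via SVD.
mean = normal_scenes.mean(axis=0)
_, _, vt = np.linalg.svd(normal_scenes - mean, full_matrices=False)
W = vt[:4].T                                  # 32 -> 4 encoder; decoder is W.T

def anomaly_score(x):
    """Reconstruction error: small for scenes resembling the training
    data, large for scenes the model has never seen."""
    centered = x - mean
    return np.linalg.norm(centered - centered @ W @ W.T, axis=-1)

# Flag a scene as anomalous when its reconstruction error exceeds the
# 99th percentile of errors on normal training data.
threshold = np.percentile(anomaly_score(normal_scenes), 99)
novel = rng.normal(size=(200, 32))            # off-subspace "anomalies"
print(np.mean(anomaly_score(novel) > threshold))  # close to 1.0
```

The design choice is the same as in the paper: the model only ever sees normal data, so anything it fails to reconstruct well is declared anomalous.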
7.
  • Mirus, F., et al. (author)
  • The Importance of Balanced Data Sets : Analyzing a Vehicle Trajectory Prediction Model based on Neural Networks and Distributed Representations
  • 2020
  • In: Proceedings of the International Joint Conference on Neural Networks. - Institute of Electrical and Electronics Engineers Inc.
  • Conference paper (peer-reviewed). Abstract:
    • Predicting the future behavior of other traffic participants is an essential task that needs to be solved by automated vehicles and human drivers alike to achieve safe and situation-aware driving. Modern approaches to vehicle trajectory prediction typically rely on data-driven models such as neural networks, in particular LSTMs (Long Short-Term Memory networks), and achieve promising results. However, the question of the optimal composition of the underlying training data has received less attention. In this paper, we expand on previous work on vehicle trajectory prediction based on neural network models employing distributed representations to encode automotive scenes in a semantic vector substrate. We analyze the influence of variations in the training data on the performance of our prediction models. We show that the models employing our semantic vector representation outperform the numerical model when trained on an adequate data set, and thus that the composition of the training data in vehicle trajectory prediction is crucial for successful training. We conduct our analysis on challenging real-world driving data.
8.
  • Ward-Cherrier, B., et al. (author)
  • A miniaturised neuromorphic tactile sensor integrated with an anthropomorphic robot hand
  • 2020
  • In: IEEE International Conference on Intelligent Robots and Systems. - Institute of Electrical and Electronics Engineers (IEEE). pp. 9883-9889
  • Conference paper (peer-reviewed). Abstract:
    • Restoring tactile sensation is essential to enable in-hand manipulation and the smooth, natural control of upper-limb prosthetic devices. Here we present a platform to contribute to that long-term vision, combining an anthropomorphic robot hand (QB SoftHand) with a neuromorphic optical tactile sensor (neuroTac). Neuromorphic sensors aim to produce efficient, spike-based representations of information for bio-inspired processing. The development of this 5-fingered, sensorized hardware platform is validated with a customized mount allowing manual control of the hand. The platform is demonstrated to successfully identify 4 objects from the YCB object set and to accurately discriminate between 4 directions of shear during stable grasps. This platform could lead to wide-ranging developments in the areas of haptics, prosthetics and telerobotics.
9.
  • Youssef, Ibrahim, et al. (author)
  • A Neuro-Inspired Computational Model for a Visually Guided Robotic Lamprey Using Frame and Event Based Cameras
  • 2020
  • In: IEEE Robotics and Automation Letters. - Institute of Electrical and Electronics Engineers (IEEE). - ISSN 2377-3766. 5(2), pp. 2395-2402
  • Journal article (peer-reviewed). Abstract:
    • The computational load associated with computer vision is often prohibitive and limits the capacity for on-board image analysis in compact mobile robots. Replicating the kind of feature detection and neural processing that animals excel at remains a challenge in most biomimetic aquatic robots. Event-driven sensors use a biologically inspired sensing strategy to eliminate the need for complete frame capture. Systems employing event-driven cameras enjoy reduced latency, power consumption, and bandwidth, and benefit from a large dynamic range. However, to the best of our knowledge, no work has been done to evaluate the performance of these devices in underwater robotics. This work proposes a robotic lamprey design capable of supporting computer vision, and uses this system to validate a computational neuron model for driving anguilliform swimming. The robot is equipped with two different types of camera: frame-based and event-based. These were used to stimulate the neural network, yielding goal-oriented swimming. Finally, a study is conducted comparing the performance of the computational model when driven by the two types of camera. Event-based cameras were observed to improve the accuracy of swimming trajectories and to significantly improve the rate at which visual inputs were processed by the network.
