SwePub

Result list for search: WFRF:(Conradt Jörg)

  • Results 1-10 of 31
1.
  • Angelopoulos, Anastasios N., et al. (author)
  • Event-Based Near-Eye Gaze Tracking Beyond 10,000 Hz
  • 2021
  • In: IEEE Transactions on Visualization and Computer Graphics. Institute of Electrical and Electronics Engineers (IEEE). ISSN 1077-2626, 1941-0506. 27:5, pp. 2577-2586
  • Journal article (peer-reviewed), abstract:
    • The cameras in modern gaze-tracking systems suffer from fundamental bandwidth and power limitations, constraining data acquisition speed to 300 Hz realistically. This obstructs the use of mobile eye trackers to perform, e.g., low latency predictive rendering, or to study quick and subtle eye motions like microsaccades using head-mounted devices in the wild. Here, we propose a hybrid frame-event-based near-eye gaze tracking system offering update rates beyond 10,000 Hz with an accuracy that matches that of high-end desktop-mounted commercial trackers when evaluated in the same conditions. Our system, previewed in Figure 1, builds on emerging event cameras that simultaneously acquire regularly sampled frames and adaptively sampled events. We develop an online 2D pupil fitting method that updates a parametric model every one or few events. Moreover, we propose a polynomial regressor for estimating the point of gaze from the parametric pupil model in real time. Using the first event-based gaze dataset, we demonstrate that our system achieves accuracies of 0.45 degrees -1.75 degrees for fields of view from 45 degrees to 98 degrees. With this technology, we hope to enable a new generation of ultra-low-latency gaze-contingent rendering and display techniques for virtual and augmented reality.
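The polynomial regressor described above, which maps the parametric pupil model to a point of gaze in real time, can be sketched roughly as follows; the second-order feature set and the plain least-squares fit are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def poly_features(xy):
    # Second-order polynomial features of the pupil centre (x, y):
    # [1, x, y, x*y, x^2, y^2]
    x, y = xy[:, 0], xy[:, 1]
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=1)

def fit_gaze_regressor(pupil_xy, gaze_xy):
    # Least-squares fit of a polynomial map from pupil centre to point of
    # gaze, using calibration pairs (pupil position, known gaze target).
    A = poly_features(pupil_xy)
    coeffs, *_ = np.linalg.lstsq(A, gaze_xy, rcond=None)
    return coeffs  # shape (6, 2): one column per gaze coordinate

def predict_gaze(coeffs, pupil_xy):
    # Evaluate the fitted polynomial for new pupil positions.
    return poly_features(pupil_xy) @ coeffs
```

Once fitted on a short calibration sequence, `predict_gaze` is a single small matrix product, so it can keep up with per-event pupil updates.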
2.
  • Chen, Guang, et al. (author)
  • A Novel Visible Light Positioning System With Event-Based Neuromorphic Vision Sensor
  • 2020
  • In: IEEE Sensors Journal. Institute of Electrical and Electronics Engineers (IEEE). ISSN 1530-437X, 1558-1748. 20:17, pp. 10211-10219
  • Journal article (peer-reviewed), abstract:
    • With the advanced development of image-processing technology, visible light positioning (VLP) systems based on image sensors have attracted more and more attention. However, the traditional CMOS camera commonly used as the light receiver has a limited dynamic range and high latency, and is susceptible to various lighting and environmental factors. Moreover, a high computational cost from image processing is unavoidable in most visible light positioning systems. In our work, a novel VLP system using an event-based neuromorphic vision sensor (event camera) as the light receiver is proposed. Owing to the low latency and microsecond-level temporal resolution of the event camera, our VLP system is able to identify multiple high-frequency flickering LEDs simultaneously in the asynchronous event stream, removing the need for data association and traditional image-processing methods. A multi-LED fusion method is applied, and a high positioning accuracy of 3 cm is achieved when the height between the LEDs and the event camera is within 1 m.
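The LED-identification step above, which recognises high-frequency flickering LEDs directly from asynchronous events, might look like this in outline; the per-pixel timestamp layout, the median-interval frequency estimate, and the matching tolerance are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def estimate_flicker_hz(timestamps_us):
    # Estimate an LED's flicker frequency from event timestamps (in
    # microseconds) observed at one pixel. Treating each event as one
    # flicker edge, the median inter-event interval approximates a period.
    t = np.sort(np.asarray(timestamps_us, dtype=np.float64))
    periods = np.diff(t)             # microseconds per cycle
    return 1e6 / np.median(periods)  # cycles per second

def match_led(freq_hz, led_table, tol_hz=100.0):
    # Map a measured frequency to the nearest known LED identifier, so each
    # LED's flicker rate acts as its positioning beacon ID.
    best = min(led_table, key=lambda led_id: abs(led_table[led_id] - freq_hz))
    return best if abs(led_table[best] - freq_hz) <= tol_hz else None
```

Because every LED is assigned a distinct flicker frequency, this lookup replaces the explicit data-association step a frame-based pipeline would need.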
3.
  • Chen, Guang, et al. (author)
  • EDDD: Event-Based Drowsiness Driving Detection Through Facial Motion Analysis With Neuromorphic Vision Sensor
  • 2020
  • In: IEEE Sensors Journal. IEEE. ISSN 1530-437X, 1558-1748. 20:11, pp. 6170-6181
  • Journal article (peer-reviewed), abstract:
    • Drowsy driving is a principal factor in many fatal traffic accidents. This paper presents the first event-based drowsiness driving detection (EDDD) system, built on the recently developed neuromorphic vision sensor. Compared with traditional frame-based cameras, neuromorphic vision sensors such as the Dynamic Vision Sensor (DVS) have a high dynamic range and do not acquire full images at a fixed frame rate; instead, independent pixels output intensity changes (called events) asynchronously at the time they occur. Since events are generated by moving edges in the scene, the DVS is an efficient and effective detector for drowsiness-related motions. Based on this unique output, this work first proposes a highly efficient method to recognize and localize the driver's eye and mouth motions from event streams. We further design and extract event-based drowsiness-related features directly from the event streams caused by eye and mouth motions, and establish the EDDD model on these features. Additionally, we provide the EDDD dataset, the first public dataset dedicated to event-based drowsiness driving detection. It contains 260 recordings made in daytime and evening, with several challenging scenes such as subjects wearing glasses or sunglasses. Experiments conducted on this dataset demonstrate the high efficiency and accuracy of our method under different illumination conditions. As the first investigation of DVS usage in drowsiness driving detection, we hope this work will inspire more event-based research in this area.
4.
  • Chen, Guang, et al. (author)
  • Event-Based Neuromorphic Vision for Autonomous Driving: A Paradigm Shift for Bio-Inspired Visual Sensing and Perception
  • 2020
  • In: IEEE Signal Processing Magazine. Institute of Electrical and Electronics Engineers (IEEE). ISSN 1053-5888, 1558-0792. 37:4, pp. 34-49
  • Journal article (peer-reviewed), abstract:
    • As a bio-inspired and emerging sensor, the event-based neuromorphic vision sensor has a working principle different from that of standard frame-based cameras, which leads to promising properties of low energy consumption, low latency, high dynamic range (HDR), and high temporal resolution. It poses a paradigm shift for sensing and perceiving the environment: capturing local pixel-level light-intensity changes and producing asynchronous event streams. Visual sensing for autonomous vehicles has been advancing from standard computer vision toward event-based neuromorphic vision. In this tutorial-style article, a comprehensive review of this emerging technology is given. First, the development of the neuromorphic vision sensor, derived from the understanding of the biological retina, is introduced. Signal-processing techniques for event-noise processing and event-data representation are then discussed. Next, signal-processing algorithms and applications of event-based neuromorphic vision in autonomous driving and various assistance systems are reviewed. Finally, challenges and future research directions are pointed out. This article is expected to serve as a starting point for new researchers and engineers in the autonomous driving field and to provide a bird's-eye view for both the neuromorphic vision and autonomous driving research communities.
5.
  • Chen, Guang, et al. (author)
  • FLGR: Fixed Length Gists Representation Learning for RNN-HMM Hybrid-Based Neuromorphic Continuous Gesture Recognition
  • 2019
  • In: Frontiers in Neuroscience. Frontiers Media SA. ISSN 1662-4548, 1662-453X. Vol. 13
  • Journal article (peer-reviewed), abstract:
    • The neuromorphic vision sensor is a novel passive, frameless sensing modality with several advantages over conventional cameras. Frame-based cameras have an average frame rate of 30 fps, causing motion blur when capturing fast motion such as hand gestures. Rather than wastefully sending entire images at a fixed frame rate, neuromorphic vision sensors only transmit the local pixel-level changes induced by movement in a scene, at the time they occur. This leads to advantageous characteristics including low energy consumption, high dynamic range, a sparse event stream, and low response latency. In this study, a novel representation-learning method is proposed: Fixed Length Gists Representation (FLGR) learning for event-based gesture recognition. Previous methods accumulate events into video frames over a time window (e.g., 30 ms) to form an image-level representation. However, this accumulated-frame representation forgoes the event-driven paradigm of the neuromorphic vision sensor. New representations are needed to fill the gap in non-accumulated-frame representations and exploit further capabilities of neuromorphic vision. The proposed FLGR is a sequence learned from a mixture-density autoencoder and better preserves the nature of event-based data. FLGR has a fixed-length format and is easy to feed to a sequence classifier. Moreover, an RNN-HMM hybrid is proposed to address continuous gesture recognition: a recurrent neural network (RNN) classifies FLGR sequences, while a hidden Markov model (HMM) localizes candidate gestures and improves results on continuous sequences. A neuromorphic continuous hand-gesture dataset (Neuro ConGD Dataset) with 17 gesture classes was developed for the neuromorphic research community. We hope FLGR can inspire studies on event-based, highly efficient, high-speed, and high-dynamic-range sequence-classification tasks.
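The accumulated-frame representation that the abstract argues against can be sketched as follows; the (t, x, y, polarity) event layout is an assumed convention, while the 30 ms window comes from the abstract.

```python
import numpy as np

def accumulate_events(events, width, height, window_us=30_000):
    # Accumulate an event stream into frames by counting events per pixel
    # over fixed 30 ms windows -- the image-level representation that FLGR
    # is proposed to replace. `events` is a sequence of (t_us, x, y, polarity).
    events = np.asarray(events)
    t0 = events[:, 0].min()
    frames = {}
    for t, x, y, p in events:
        idx = int((t - t0) // window_us)
        frame = frames.setdefault(idx, np.zeros((height, width), dtype=np.int32))
        frame[int(y), int(x)] += 1
    return [frames[i] for i in sorted(frames)]
```

The downside this abstract points out is visible in the code: every event is flattened into a frame index, so all sub-window timing (the "event-driven paradigm") is discarded.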
6.
  • Chen, Guang, et al. (author)
  • Multi-Cue Event Information Fusion for Pedestrian Detection With Neuromorphic Vision Sensors
  • 2019
  • In: Frontiers in Neurorobotics. Frontiers Media SA. ISSN 1662-5218. Vol. 13
  • Journal article (peer-reviewed), abstract:
    • Neuromorphic vision sensors are bio-inspired cameras that naturally capture the dynamics of a scene with ultra-low latency, filtering out redundant information with low power consumption. Few works address object detection with this sensor. In this work, we develop pedestrian detectors that unlock the potential of event data by leveraging multi-cue information and different fusion strategies. To make the best of the event data, we introduce three event-stream encoding methods based on Frequency, Surface of Active Events (SAE), and Leaky Integrate-and-Fire (LIF). We further integrate them into state-of-the-art neural-network architectures with two fusion approaches: channel-level fusion of the raw feature space and decision-level fusion of the probability assignments. We give a qualitative and quantitative account of why the different encoding methods are chosen for evaluating pedestrian detection and which performs best. We demonstrate the advantages of decision-level fusion by leveraging multi-cue event information, and show that our approach performs well on a self-annotated event-based pedestrian dataset with 8,736 event frames. This work paves the way for further perception applications with neuromorphic vision sensors.
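Of the three encodings named above, the Surface of Active Events (SAE) is the simplest to illustrate: each pixel stores the timestamp of its most recent event, so newer motion appears "brighter". The sketch below assumes a (t, x, y, polarity) event layout and max-based normalisation, details the abstract does not specify.

```python
import numpy as np

def surface_of_active_events(events, width, height):
    # Surface of Active Events (SAE): each pixel keeps the timestamp of its
    # most recent event, normalised to [0, 1] so recent events score higher.
    sae = np.zeros((height, width), dtype=np.float64)
    for t, x, y, p in events:
        sae[int(y), int(x)] = max(sae[int(y), int(x)], t)
    t_max = sae.max()
    return sae / t_max if t_max > 0 else sae
```

The resulting single-channel map can be fed to a standard convolutional detector, which is how encodings like this are combined with frame-based architectures in the paper's channel-level fusion.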
7.
  • Chen, Guang, et al. (author)
  • NeuroAED: Towards Efficient Abnormal Event Detection in Visual Surveillance With Neuromorphic Vision Sensor
  • 2021
  • In: IEEE Transactions on Information Forensics and Security. Institute of Electrical and Electronics Engineers (IEEE). ISSN 1556-6013, 1556-6021. 16, pp. 923-936
  • Journal article (peer-reviewed), abstract:
    • Abnormal event detection is an important task in research and industrial applications and has received considerable attention in recent years. Existing methods usually rely on standard frame-based cameras to record data and process it with computer vision technologies. In contrast, this paper presents a novel neuromorphic-vision-based abnormal event detection system. Compared to frame-based cameras, neuromorphic vision sensors such as the Dynamic Vision Sensor (DVS) do not acquire full images at a fixed frame rate; instead, independent pixels output intensity changes (called events) asynchronously at the time they occur. Because no full images are recorded, privacy can be preserved without a dedicated encryption scheme. Since events are triggered by moving edges in the scene, the DVS is a natural motion detector for abnormal objects and automatically filters out temporally redundant information. Based on this unique output, we first propose a highly efficient method based on event density to select activated event cuboids and locate the foreground. We design a novel event-based multiscale spatio-temporal descriptor to extract features from the activated event cuboids for abnormal event detection. Additionally, we build the NeuroAED dataset, the first public dataset dedicated to abnormal event detection with a neuromorphic vision sensor. It consists of four sub-datasets: Walking, Campus, Square, and Stair. Experiments conducted on these datasets demonstrate the high efficiency and accuracy of our method.
8.
  • Chen, Guang, et al. (author)
  • NeuroIV: Neuromorphic Vision Meets Intelligent Vehicle Towards Safe Driving With a New Database and Baseline Evaluations
  • 2022
  • In: IEEE Transactions on Intelligent Transportation Systems. Institute of Electrical and Electronics Engineers (IEEE). ISSN 1524-9050, 1558-0016. 23:2, pp. 1171-1183
  • Journal article (peer-reviewed), abstract:
    • Neuromorphic vision sensors such as the Dynamic and Active-pixel Vision Sensor (DAVIS), built on the silicon retina, are inspired by biological vision: they generate streams of asynchronous events that indicate local log-intensity brightness changes. Their high temporal resolution, low bandwidth, lightweight computation, and low latency make them a good fit for many motion-perception applications in the intelligent vehicle. However, as a younger and smaller research field than classical computer vision, neuromorphic vision is rarely connected with the intelligent vehicle. To this end, we present three novel datasets recorded with DAVIS sensors and a depth sensor for distracted-driving research, focusing on driver drowsiness detection, driver gaze-zone recognition, and driver hand-gesture recognition. To facilitate comparison with classical computer vision, we simultaneously record RGB, depth, and infrared data with a depth sensor. The dataset totals 27,360 samples. To unlock the potential of neuromorphic vision for the intelligent vehicle, we use three popular event-encoding methods to convert asynchronous event slices to event frames, and adapt state-of-the-art convolutional architectures to extensively evaluate their performance on this dataset. Together with qualitative and quantitative results, this work provides a new database and baseline evaluations, named NeuroIV, in the cross-cutting areas of neuromorphic vision and intelligent vehicles.
9.
  • Chen, G., et al. (author)
  • Neuromorphic Vision-Based Fall Localization in Event Streams with Temporal-Spatial Attention Weighted Network
  • 2022
  • In: IEEE Transactions on Cybernetics. Institute of Electrical and Electronics Engineers (IEEE). ISSN 2168-2267, 2168-2275. 52:9, pp. 9251-9262
  • Journal article (peer-reviewed), abstract:
    • Falling is a serious health problem and has become one of the major causes of accidental death among the elderly living alone. In recent years, much effort has been devoted to fall recognition based on wearable sensors or standard vision sensors. However, prior methods risk privacy leaks, and almost all are based on video clips, so they cannot localize where falls occur in long videos. For these reasons, this article proposes a bio-inspired vision-sensor-based framework for the temporal localization of falls. The bio-inspired vision sensor applied in this work, the dynamic and active-pixel vision sensor (DAVIS) camera, responds to per-pixel brightness changes, with each pixel working independently and asynchronously, unlike standard vision sensors. This property gives it a very high dynamic range and preserves privacy. First, to better represent event data, an adaptive temporal-window conversion mechanism is developed in place of the typical constant temporal window. The temporal-localization framework follows a proven proposal-and-classification paradigm. Second, for efficient, high-recall proposal generation, instead of the traditional sliding-window scheme, the event temporal density is used as the actionness score and a 1D watershed algorithm generates the proposals. In addition, we combine temporal and spatial attention mechanisms with our feature-extraction network to temporally model the falls. Finally, to evaluate the framework, 30 volunteers were recruited for simulated fall experiments. The experimental results show that our framework achieves precise temporal localization of falls and state-of-the-art performance.
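The proposal-generation idea above can be approximated in a few lines; note that this sketch substitutes plain thresholding for the paper's 1D watershed algorithm, and the bin width and threshold are assumed parameters.

```python
import numpy as np

def actionness(event_times_us, bin_us=100_000):
    # Event temporal density: number of events per fixed time bin, used as
    # an actionness score over the recording (falls produce dense bursts).
    t = np.asarray(event_times_us)
    bins = ((t - t.min()) // bin_us).astype(int)
    return np.bincount(bins)

def proposals(score, thresh):
    # Simplified stand-in for the 1D watershed: contiguous runs of bins
    # whose actionness exceeds `thresh` become temporal proposals
    # (half-open bin-index intervals) for the downstream classifier.
    active = score > thresh
    out, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            out.append((start, i))
            start = None
    if start is not None:
        out.append((start, len(active)))
    return out
```

Each proposal interval would then be classified (fall vs. non-fall) by the attention-weighted network, completing the proposal-and-classification paradigm.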
10.
  • Chen, Guang, et al. (author)
  • Neuromorphic Vision Based Multivehicle Detection and Tracking for Intelligent Transportation System
  • 2018
  • In: Journal of Advanced Transportation. Hindawi Limited. ISSN 0197-6729, 2042-3195.
  • Journal article (peer-reviewed), abstract:
    • The neuromorphic vision sensor is a new passive, frameless sensing modality with a number of advantages over traditional cameras. Instead of wastefully sending entire images at a fixed frame rate, it only transmits the local pixel-level changes caused by movement in a scene, at the time they occur. This results in low energy consumption, high dynamic range, a sparse event stream, and low response latency, which are very useful for intelligent perception in modern intelligent transportation systems (ITS) that require efficient wireless data communication and low-power embedded computing. In this paper, we propose the first neuromorphic-vision-based multivehicle detection and tracking system for ITS. The system is evaluated on a dataset recorded by a neuromorphic vision sensor mounted on a highway bridge. We performed a preliminary multivehicle tracking-by-clustering study using three classical clustering approaches and four tracking approaches. Our results indicate that, by making full use of the low latency and sparse event stream, an online tracking-by-clustering system can run at a high frame rate, far exceeding the real-time capabilities of traditional frame-based cameras. If accuracy is prioritized, the tracking task can also be performed robustly at a relatively high rate with different combinations of algorithms. We also provide our dataset and evaluation approaches, which serve as the first neuromorphic benchmark in ITS, and hope to motivate further research on neuromorphic vision sensors for ITS solutions.
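A minimal tracking-by-clustering loop in the spirit of the study above might look like this; the greedy running-mean clustering and nearest-centroid association are simple stand-ins chosen for brevity, not the classical clustering and tracking algorithms the paper actually compares.

```python
import numpy as np

def cluster_events(xy, radius=10.0):
    # Greedy distance-based clustering of event coordinates within one time
    # slice: each event joins the first centroid within `radius` (updated as
    # a running mean) or starts a new cluster (candidate vehicle).
    centroids, counts = [], []
    for p in np.asarray(xy, dtype=np.float64):
        for i, c in enumerate(centroids):
            if np.linalg.norm(p - c) <= radius:
                counts[i] += 1
                centroids[i] = c + (p - c) / counts[i]
                break
        else:
            centroids.append(p.copy())
            counts.append(1)
    return np.array(centroids)

def associate(prev, curr, max_dist=20.0):
    # Nearest-centroid association between consecutive time slices: returns
    # (prev_index, curr_index) pairs, i.e. continued vehicle tracks.
    matches = []
    for j, c in enumerate(curr):
        d = np.linalg.norm(prev - c, axis=1)
        i = int(d.argmin())
        if d[i] <= max_dist:
            matches.append((i, j))
    return matches
```

Because the event stream is sparse, each time slice contains only the moving vehicles' pixels, which is what lets a loop like this run far faster than frame-based tracking.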
  • Results 1-10 of 31