SwePub
Search the SwePub database

  Extended search

Search: WFRF:(Knoll Alois)

  • Result 1-10 of 16
1.
  • Bagge Carlson, Fredrik, et al. (author)
  • Modeling and Identification of Position and Temperature Dependent Friction Phenomena without Temperature Sensing
  • 2015
  • In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3045-3051
  • Conference paper (peer-reviewed). Abstract:
    • This paper investigates both positional dependence in systems with friction and the influence of temperature increases on friction behavior. The positional dependence is modeled with a Radial Basis Function network, and the temperature dependence is modeled as a first-order system with the power loss due to friction as input, eliminating the need for temperature sensing. The proposed methods are evaluated in both simulations and experiments on two industrial robots with strong positional and temperature friction dependence.
  •  
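The temperature model described in the abstract above (a first-order system driven by the power loss due to friction) can be sketched in a few lines; the time constant `tau` and `gain` below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def simulate_temperature_state(power_loss, dt=0.01, tau=120.0, gain=1.0):
    """Forward-Euler simulation of a first-order temperature state
    driven by friction power loss: tau * dx/dt = -x + gain * p.

    tau (seconds) and gain are hypothetical illustration values.
    """
    x = 0.0
    states = np.empty(len(power_loss))
    for k, p in enumerate(power_loss):
        # first-order lag toward the steady state gain * p
        x += dt * (-x + gain * p) / tau
        states[k] = x
    return states
```

Such an internal state can then index a temperature-dependent friction coefficient without any temperature sensor, mirroring the idea in the abstract.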
2.
  • Bagge Carlson, Fredrik, et al. (author)
  • Six DOF Eye-to-Hand Calibration from 2D Measurements Using Planar Constraints
  • 2015
  • In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3628-3632
  • Conference paper (peer-reviewed). Abstract:
    • This article presents a linear, iterative method to solve the eye-to-hand calibration problem between a wrist-mounted laser scanner and the tool flange of a robot. Measurement data are acquired from a set of non-parallel planes, after which the plane equations and the desired rigid transformation matrix are found in a two-step, iterative fashion. The method is shown to handle large errors in the initial estimate of the transform, and the results are verified in both simulations and experiments using a seam-tracking laser sensor for welding applications.
  •  
3.
  • Chen, Guang, et al. (author)
  • A Novel Visible Light Positioning System With Event-Based Neuromorphic Vision Sensor
  • 2020
  • In: IEEE Sensors Journal. - Institute of Electrical and Electronics Engineers (IEEE). - ISSN 1530-437X, E-ISSN 1558-1748. ; 20:17, pp. 10211-10219
  • Journal article (peer-reviewed). Abstract:
    • With the advanced development of image processing technology, visible light positioning (VLP) systems based on image sensors have attracted more and more attention. However, the traditional CMOS camera commonly used as a light receiver has a limited dynamic range and high latency, and is susceptible to various lighting and environmental factors. Moreover, the high computational cost of image processing is unavoidable for most visible light positioning systems. In our work, a novel VLP system using an event-based neuromorphic vision sensor (event camera) as the light receiver is proposed. Due to the low latency and microsecond-level temporal resolution of the event camera, our VLP system is able to identify multiple high-frequency flickering LEDs in the asynchronous event stream simultaneously, eliminating the need for data association and traditional image processing methods. A multi-LED fusion method is applied, and a high positioning accuracy of 3 cm is achieved when the height between the LEDs and the event camera is within 1 m.
  •  
4.
  • Chen, Guang, et al. (author)
  • EDDD : Event-Based Drowsiness Driving Detection Through Facial Motion Analysis With Neuromorphic Vision Sensor
  • 2020
  • In: IEEE Sensors Journal. - IEEE. - ISSN 1530-437X, E-ISSN 1558-1748. ; 20:11, pp. 6170-6181
  • Journal article (peer-reviewed). Abstract:
    • Drowsy driving is a principal factor in many fatal traffic accidents. This paper presents the first event-based drowsiness driving detection (EDDD) system, built on the recently developed neuromorphic vision sensor. Compared with traditional frame-based cameras, neuromorphic vision sensors, such as Dynamic Vision Sensors (DVS), have a high dynamic range and do not acquire full images at a fixed frame rate; instead, independent pixels output intensity changes (called events) asynchronously at the time they occur. Since events are generated by moving edges in the scene, the DVS is an efficient and effective detector for drowsiness-related motions. Based on this unique output, this work first proposes a highly efficient method to recognize and localize the driver's eye and mouth motions from event streams. We further design and extract event-based drowsiness-related features directly from the event streams caused by eye and mouth motions; the EDDD model is then established based on these features. Additionally, we provide the EDDD dataset, the first public dataset dedicated to event-based drowsiness driving detection. The EDDD dataset has 260 recordings in daytime and evening, with several challenging scenes such as subjects wearing glasses or sunglasses. Experiments conducted on this dataset demonstrate the high efficiency and accuracy of our method under different illumination conditions. As the first investigation of DVS in drowsiness driving detection applications, we hope that this work will inspire more event-based drowsiness driving detection research.
  •  
5.
  • Chen, Guang, et al. (author)
  • Event-Based Neuromorphic Vision for Autonomous Driving : A Paradigm Shift for Bio-Inspired Visual Sensing and Perception
  • 2020
  • In: IEEE Signal Processing Magazine (Print). - Institute of Electrical and Electronics Engineers (IEEE). - ISSN 1053-5888, E-ISSN 1558-0792. ; 37:4, pp. 34-49
  • Journal article (peer-reviewed). Abstract:
    • As a bio-inspired and emerging sensor, the event-based neuromorphic vision sensor has a different working principle from standard frame-based cameras, which leads to promising properties: low energy consumption, low latency, high dynamic range (HDR), and high temporal resolution. It poses a paradigm shift in sensing and perceiving the environment by capturing local pixel-level light intensity changes and producing asynchronous event streams. Advanced technologies for the visual sensing systems of autonomous vehicles, from standard computer vision to event-based neuromorphic vision, have been developed. In this tutorial-like article, a comprehensive review of this emerging technology is given. First, the development of the neuromorphic vision sensor, derived from the understanding of the biological retina, is introduced. The signal processing techniques for event noise processing and event data representation are then discussed. Next, the signal processing algorithms and applications of event-based neuromorphic vision in autonomous driving and various assistance systems are reviewed. Finally, challenges and future research directions are pointed out. It is expected that this article will serve as a starting point for new researchers and engineers in the autonomous driving field and provide a bird's-eye view to both the neuromorphic vision and autonomous driving research communities.
  •  
6.
  • Chen, Guang, et al. (author)
  • FLGR : Fixed Length Gists Representation Learning for RNN-HMM Hybrid-Based Neuromorphic Continuous Gesture Recognition
  • 2019
  • In: Frontiers in Neuroscience. - Frontiers Media SA. - ISSN 1662-4548, E-ISSN 1662-453X. ; 13
  • Journal article (peer-reviewed). Abstract:
    • The neuromorphic vision sensor is a novel passive, frameless sensing modality with several advantages over conventional cameras. Frame-based cameras have an average frame rate of 30 fps, causing motion blur when capturing fast motion, e.g., hand gestures. Rather than wastefully sending entire images at a fixed frame rate, neuromorphic vision sensors only transmit the local pixel-level changes induced by movement in a scene at the time they occur. This leads to advantageous characteristics, including low energy consumption, high dynamic range, a sparse event stream, and low response latency. In this study, a novel representation learning method is proposed: Fixed Length Gists Representation (FLGR) learning for event-based gesture recognition. Previous methods accumulate events into video frames over a time window (e.g., 30 ms) to form an image-level representation. However, the accumulated-frame-based representation forgoes the event-driven paradigm of the neuromorphic vision sensor. New representations are needed to fill the gap in non-accumulated-frame-based representations and exploit the further capabilities of neuromorphic vision. The proposed FLGR is a sequence learned by a mixture density autoencoder that better preserves the nature of event-based data. FLGR has a fixed-length data format, making it easy to feed to a sequence classifier. Moreover, an RNN-HMM hybrid is proposed to address the continuous gesture recognition problem: a recurrent neural network (RNN) is applied for FLGR sequence classification, while a hidden Markov model (HMM) is employed for localizing candidate gestures and improving the result in a continuous sequence. A neuromorphic continuous hand gesture dataset (Neuro ConGD Dataset) with 17 hand gesture classes was developed for the neuromorphic research community. We hope FLGR can inspire the study of event-based, highly efficient, high-speed, and high-dynamic-range sequence classification tasks.
  •  
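The accumulated-frame baseline that FLGR argues against can be sketched in a few lines; the (t_us, x, y, polarity) event format and the 30 ms window below are illustrative assumptions, not the paper's interface:

```python
import numpy as np

def events_to_frames(events, width, height, window_ms=30.0):
    """Accumulate asynchronous events into fixed-rate signed count frames.

    events is assumed to be an iterable of (t_us, x, y, polarity)
    tuples with non-decreasing timestamps; this format is an
    illustrative assumption, not a fixed API.
    """
    frames = []
    frame = np.zeros((height, width), dtype=np.int32)
    window_us = window_ms * 1000.0
    t_end = None
    for t, x, y, pol in events:
        if t_end is None:
            t_end = t + window_us
        while t >= t_end:                 # close the current window
            frames.append(frame)
            frame = np.zeros((height, width), dtype=np.int32)
            t_end += window_us
        frame[y, x] += 1 if pol else -1   # signed event count per pixel
    frames.append(frame)
    return frames
```

Each returned frame is the image-level representation criticized in the abstract: all temporal structure inside the window is collapsed into a single count per pixel.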
7.
  • Chen, Guang, et al. (author)
  • Multi-Cue Event Information Fusion for Pedestrian Detection With Neuromorphic Vision Sensors
  • 2019
  • In: Frontiers in Neurorobotics. - Frontiers Media SA. - ISSN 1662-5218. ; 13
  • Journal article (peer-reviewed). Abstract:
    • Neuromorphic vision sensors are bio-inspired cameras that naturally capture the dynamics of a scene with ultra-low latency, filtering out redundant information with low power consumption. Few works address object detection with this sensor. In this work, we develop pedestrian detectors that unlock the potential of the event data by leveraging multi-cue information and different fusion strategies. To make the best of the event data, we introduce three event-stream encoding methods based on Frequency, Surface of Active Events (SAE), and Leaky Integrate-and-Fire (LIF). We further integrate them into state-of-the-art neural network architectures with two fusion approaches: channel-level fusion of the raw feature space and decision-level fusion with probability assignments. We present a qualitative and quantitative explanation of why the different encoding methods were chosen for evaluating pedestrian detection and which method performs best. We demonstrate the advantages of decision-level fusion by leveraging multi-cue event information and show that our approach performs well on a self-annotated event-based pedestrian dataset with 8,736 event frames. This work paves the way for more fascinating perception applications with neuromorphic vision sensors.
  •  
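Of the three encodings named in the abstract above, the Surface of Active Events is the easiest to sketch: each pixel keeps the timestamp of its most recent event, normalized into an image. The (t, x, y, polarity) event format and the [0, 1] normalization below are illustrative assumptions:

```python
import numpy as np

def surface_of_active_events(events, width, height):
    """Surface of Active Events: per-pixel timestamp of the latest event.

    events is assumed to be (t, x, y, polarity) tuples with
    non-decreasing timestamps; normalizing by the maximum timestamp
    is an illustrative choice.
    """
    sae = np.zeros((height, width), dtype=np.float64)
    for t, x, y, _pol in events:
        sae[y, x] = t              # most recent timestamp wins
    t_max = sae.max()
    return sae / t_max if t_max > 0 else sae
```

The resulting image emphasizes recently active pixels, which is why it works as an input channel for conventional detection networks.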
8.
  • Chen, Guang, et al. (author)
  • NeuroAED : Towards Efficient Abnormal Event Detection in Visual Surveillance With Neuromorphic Vision Sensor
  • 2021
  • In: IEEE Transactions on Information Forensics and Security. - Institute of Electrical and Electronics Engineers (IEEE). - ISSN 1556-6013, E-ISSN 1556-6021. ; 16, pp. 923-936
  • Journal article (peer-reviewed). Abstract:
    • Abnormal event detection is an important task in research and industrial applications that has received considerable attention in recent years. Existing methods usually rely on standard frame-based cameras to record the data and process them with computer vision technologies. In contrast, this paper presents a novel neuromorphic-vision-based abnormal event detection system. Compared to the frame-based camera, neuromorphic vision sensors, such as the Dynamic Vision Sensor (DVS), do not acquire full images at a fixed frame rate but rather have independent pixels that output intensity changes (called events) asynchronously at the time they occur, thus avoiding the need for an encryption scheme. Since events are triggered by moving edges in the scene, the DVS is a natural motion detector for abnormal objects and automatically filters out temporally redundant information. Based on this unique output, we first propose a highly efficient method based on event density to select activated event cuboids and locate the foreground. We then design a novel event-based multiscale spatio-temporal descriptor to extract features from the activated event cuboids for abnormal event detection. Additionally, we build the NeuroAED dataset, the first public dataset dedicated to abnormal event detection with a neuromorphic vision sensor. The NeuroAED dataset consists of four sub-datasets: Walking, Campus, Square, and Stair. Experiments conducted on these datasets demonstrate the high efficiency and accuracy of our method.
  •  
9.
  • Chen, Guang, et al. (author)
  • NeuroIV : Neuromorphic Vision Meets Intelligent Vehicle Towards Safe Driving With a New Database and Baseline Evaluations
  • 2022
  • In: IEEE Transactions on Intelligent Transportation Systems (Print). - Institute of Electrical and Electronics Engineers (IEEE). - ISSN 1524-9050, E-ISSN 1558-0016. ; 23:2, pp. 1171-1183
  • Journal article (peer-reviewed). Abstract:
    • Neuromorphic vision sensors such as the Dynamic and Active-pixel Vision Sensor (DAVIS), built on a silicon retina, are inspired by biological vision: they generate streams of asynchronous events that indicate local log-intensity brightness changes. Their high temporal resolution, low bandwidth, lightweight computation, and low latency make them a good fit for many motion perception applications in intelligent vehicles. However, as a younger and smaller research field compared to classical computer vision, neuromorphic vision is rarely connected with the intelligent vehicle. For this purpose, we present three novel datasets recorded with DAVIS sensors and a depth sensor for distracted driving research, focusing on driver drowsiness detection, driver gaze-zone recognition, and driver hand-gesture recognition. To facilitate comparison with classical computer vision, we simultaneously record RGB, depth, and infrared data with a depth sensor. The total dataset comprises 27,360 samples. To unlock the potential of neuromorphic vision in the intelligent vehicle, we utilize three popular event-encoding methods to convert asynchronous event slices to event frames and adapt state-of-the-art convolutional architectures to extensively evaluate their performance on this dataset. Together with qualitative and quantitative results, this work provides a new database and baseline evaluations, named NeuroIV, in the cross-cutting areas of neuromorphic vision and the intelligent vehicle.
  •  
10.
  • Chen, Guang, et al. (author)
  • Neuromorphic Vision Based Multivehicle Detection and Tracking for Intelligent Transportation System
  • 2018
  • In: Journal of Advanced Transportation. - Hindawi Limited. - ISSN 0197-6729, E-ISSN 2042-3195.
  • Journal article (peer-reviewed). Abstract:
    • The neuromorphic vision sensor is a new passive, frameless sensing modality with a number of advantages over traditional cameras. Instead of wastefully sending entire images at a fixed frame rate, the neuromorphic vision sensor only transmits the local pixel-level changes caused by movement in a scene at the time they occur. This results in advantageous characteristics in terms of low energy consumption, high dynamic range, a sparse event stream, and low response latency, which can be very useful in intelligent perception systems for the modern intelligent transportation system (ITS), which requires efficient wireless data communication and low-power embedded computing resources. In this paper, we propose the first neuromorphic-vision-based multivehicle detection and tracking system for ITS. The performance of the system is evaluated on a dataset recorded by a neuromorphic vision sensor mounted on a highway bridge. We performed a preliminary multivehicle tracking-by-clustering study using three classical clustering approaches and four tracking approaches. Our experimental results indicate that, by making full use of the low latency and sparse event stream, we can easily integrate an online tracking-by-clustering system running at a high frame rate, far exceeding the real-time capabilities of traditional frame-based cameras. If accuracy is prioritized, the tracking task can also be performed robustly at a relatively high rate with different combinations of algorithms. We also provide our dataset and evaluation approaches, which serve as the first neuromorphic benchmark for ITS and will hopefully motivate further research on neuromorphic vision sensors for ITS solutions.
  •  
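The tracking-by-clustering idea above can be sketched as a greedy nearest-centroid association step applied to per-frame cluster centroids; the association rule and the distance threshold are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def track_by_clustering(frames_of_centroids, max_dist=20.0):
    """Greedy nearest-centroid tracker over per-frame cluster centroids.

    Each element of frames_of_centroids is a list of (x, y) centroids
    already extracted from one event slice (e.g., by a classical
    clustering algorithm). A centroid joins the closest live track
    within max_dist pixels; otherwise it starts a new track.
    """
    tracks = {}        # track id -> list of (frame index, centroid)
    prev = {}          # track id -> centroid in the previous frame
    next_id = 0
    for f, centroids in enumerate(frames_of_centroids):
        available = dict(prev)     # each track matched at most once
        new_prev = {}
        for c in centroids:
            c = np.asarray(c, dtype=float)
            best_id, best_d = None, max_dist
            for tid, p in available.items():
                d = float(np.linalg.norm(c - p))
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:
                best_id, next_id = next_id, next_id + 1
                tracks[best_id] = []
            else:
                del available[best_id]
            tracks[best_id].append((f, tuple(c)))
            new_prev[best_id] = c
        prev = new_prev
    return tracks
```

Because each event slice can cover a much shorter interval than a camera frame, even this naive association can run at the high rates the abstract describes.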


 