SwePub
Search the SwePub database


Result list for search "WFRF:(Ödblom Anders)"


  • Results 1-10 of 14
1.
  • Andhill, Carl Johan, et al. (author)
  • ViPCity
  • 2016
  • Report (other academic/artistic) abstract
    • Today, simulation studies in ViP are mainly carried out in countryside driving environments; city environments are lacking. This is probably because creating and running countryside environments is in some respects easier than creating and running city environments. Another reason might be that countryside driving is very relevant in Swedish studies. As projects and markets become more international, the need for city simulator studies becomes more important. Many drivers around the world do most of their driving in cities. In the ViPCity project, software has been developed that facilitates the generation of driving environments for city simulations on the ViP platform. The project result is a number of assets (software, file formats and 3D components) that integrate well with the ViP platform. Together, these assets let simulator users design city environments quickly and easily. The software has been implemented and tested successfully in Scania's truck simulator.
2.
  • Fu, Keren, 1988, et al. (author)
  • Automatic traffic sign recognition based on saliency-enhanced features and SVMs from incrementally built dataset
  • 2014
  • In: Proceedings of the 3rd International Conference on Connected Vehicles and Expo, ICCVE 2014; Vienna; Austria; 3-7 November 2014. - 9781479967292 ; pp. 947-952
  • Conference paper (peer-reviewed) abstract
    • This paper proposes an automatic traffic sign recognition method based on saliency-enhanced features and SVMs. When humans observe a traffic sign, a two-stage procedure is performed: first locating the sign region according to its unique shape and color, and then attending to the content inside the sign. The proposed saliency feature extraction attempts to resemble these two processing stages. We model the first stage by extracting salient regions of signs from detected bounding boxes produced by a sign detector. Salient region extraction is formulated as an energy propagation process on a local structured graph. The second stage is modeled by exploiting a non-linear color mapping under the guidance of the output of the first stage. As a result, the salient signature inside a sign stands out and can be used directly by subsequent SVMs for classification. The proposed method is validated on an incrementally built Chinese traffic sign dataset. (See the sketch below.)
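A minimal sketch of the two-stage idea described in the abstract above, assuming scikit-image and scikit-learn: a saliency mask suppresses clutter inside the detected bounding box before HOG features are fed to an SVM. The saliency_mask helper is a crude color-based stand-in, not the paper's energy propagation on a local structured graph or its non-linear color mapping.

```python
# Sketch: saliency-masked feature extraction + SVM classification.
# 'saliency_mask' is a crude stand-in for the paper's energy-propagation
# step; here it simply emphasizes strongly red or blue pixels (sign rims).
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def saliency_mask(patch):
    """Crude stand-in saliency: highlight red/blue-dominant pixels."""
    r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
    sal = np.maximum(r - (g + b) / 2, b - (r + g) / 2)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-9)

def salient_features(patch, size=(48, 48)):
    """HOG features from a saliency-weighted grayscale patch."""
    patch = resize(patch, size, anti_aliasing=True)
    gray = patch.mean(axis=2) * saliency_mask(patch)   # suppress clutter
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def train_classifier(patches, labels):
    """'patches' are cropped RGB detections, 'labels' their sign classes."""
    X = np.stack([salient_features(p) for p in patches])
    return SVC(kernel="rbf", C=10.0).fit(X, labels)
```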
3.
  • Fu, Keren, 1988, et al. (author)
  • Detection and Recognition of Traffic Signs from Videos using Saliency-Enhanced Features
  • 2015
  • In: Nationell konferens i transportforskning, Oct. 21-22, 2015, Karlstads universitet, Sweden. ; p. 2-
  • Conference paper (peer-reviewed) abstract
    • Traffic sign recognition (TSR), including sign detection and classification, is an essential part of advanced driver assistance systems and autonomous vehicles. TSR, which exploits image analysis and computer vision techniques, has drawn increasing interest lately due to renewed efforts in vehicle safety and autonomous driving. Applications include, among many others, advanced driver assistance systems, sign inventory and intelligent autonomous driving. We propose efficient methods for detection and classification of traffic signs from automatically cropped street view images. The main novelties in the paper include:
      • An approach for automatic cropping of street view images from publicly available websites. The method detects and crops candidate traffic sign regions (bounding boxes) along the roads of a specified route (i.e., the beginning and end points of the road), instead of conventionally using existing datasets (see the pipeline skeleton below).
      • An approach for generating saliency-enhanced features for the classifier. A novel method for obtaining the saliency-enhanced regions is proposed, based on a propagation process that enhances the sign parts attracting visual attention, which consequently leads to salient feature extraction. This approach overcomes the shortcoming of conventional methods, where features are extracted from the entire region of a detected bounding box, which usually contains clutter (or background).
      • A coarse-to-fine classification method that first classifies among different sign categories (e.g. the categories of forbidden and warning signs), followed by fine classification of traffic signs within each category.
    The proposed methods have been tested on two categories of Chinese traffic signs, each containing many different signs.
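A pipeline skeleton for the route-based cropping step, assuming OpenCV. The fetch_panoramas callback is hypothetical, standing in for whatever street-view service supplies images between the two road endpoints, and the color-threshold detector is a deliberately rough placeholder for the paper's sign detector.

```python
# Skeleton: crop candidate sign regions from street view images along a route.
# 'fetch_panoramas' is a hypothetical data source, not a real API.
import cv2

def candidate_boxes(image_bgr, min_area=400):
    """Very rough candidate detector: threshold sign-like red hues in HSV."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis in OpenCV, so combine two ranges.
    mask = (cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) |
            cv2.inRange(hsv, (170, 100, 80), (180, 255, 255)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

def crop_along_route(start, end, fetch_panoramas):
    """Collect candidate sign crops from all images between two endpoints."""
    crops = []
    for img in fetch_panoramas(start, end):   # hypothetical image source
        for (x, y, w, h) in candidate_boxes(img):
            crops.append(img[y:y + h, x:x + w])
    return crops
```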
4.
  • Fu, Keren, 1988, et al. (author)
  • Geodesic Distance Transform-based Salient Region Segmentation for Automatic Traffic Sign Recognition
  • 2016
  • In: Proceedings - 2016 IEEE Intelligent Vehicles Symposium, IV 2016, Gothenburg, Sweden, 19-22 June 2016. - 9781509018215 ; 2016-August, pp. 948-953
  • Conference paper (peer-reviewed) abstract
    • Visual-based traffic sign recognition (TSR) requires first detecting and then classifying signs from captured images. In such a cascade system, classification accuracy is often affected by the detection results. This paper proposes a method for extracting a salient region of a traffic sign within a detection window for more accurate sign representation and feature extraction, hence enhancing classification performance. In the proposed method, a superpixel-based distance map is first generated by applying a signed geodesic distance transform from a set of selected foreground and background seeds. An effective method for obtaining a final segmentation from the distance map is then proposed by incorporating the shape constraints of signs. Using these two steps, our method is able to automatically extract salient sign regions of different shapes. The proposed method is tested and validated in a complete TSR system. Test results show that the proposed method leads to a high classification accuracy (97.11%) on a large dataset containing street images. Compared to the same TSR system without saliency-segmented regions, the proposed method yields a marked performance improvement (about 12.84%). Future work will extend the method to more traffic sign categories and compare it with other benchmark methods. (See the sketch below.)
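A sketch of the signed geodesic distance idea, assuming scikit-image and SciPy: SLIC superpixels form a graph weighted by color difference, Dijkstra gives geodesic distances from foreground and background seeds, and the sign of their difference yields the segmentation. Seed selection and the paper's shape constraints are omitted.

```python
# Sketch: signed geodesic distance transform on a superpixel graph.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra
from skimage.segmentation import slic

def signed_geodesic_segment(image, fg_seeds, bg_seeds, n_segments=200):
    """Return a boolean mask of superpixels geodesically closer to fg seeds."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    n = labels.max() + 1
    means = np.array([image[labels == i].mean(axis=0) for i in range(n)])

    # Adjacency graph: edge weight = color distance between touching superpixels.
    W = lil_matrix((n, n))
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b:
            W[a, b] = W[b, a] = np.linalg.norm(means[a] - means[b]) + 1e-6
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b:
            W[a, b] = W[b, a] = np.linalg.norm(means[a] - means[b]) + 1e-6

    # Geodesic distance to the nearest seed of each kind.
    d_fg = dijkstra(W.tocsr(), indices=fg_seeds).min(axis=0)
    d_bg = dijkstra(W.tocsr(), indices=bg_seeds).min(axis=0)
    signed = d_bg - d_fg                 # positive: closer to foreground
    return (signed > 0)[labels]          # per-pixel boolean mask
```

The pixel-pair loops are slow in pure Python but keep the graph construction explicit; a real implementation would vectorize the adjacency step.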
5.
  • Fu, Keren, 1988, et al. (author)
  • Traffic Sign Recognition using Salient Region Features: A Novel Learning-based Coarse-to-Fine Scheme
  • 2015
  • In: IEEE Intelligent Vehicles Symposium, June 28-July 1, 2015, Seoul, Korea. - 9781467372664 ; 2015-August, pp. 443-448
  • Conference paper (peer-reviewed) abstract
    • Traffic sign recognition, including sign detection and classification, is essential for advanced driver assistance systems and autonomous vehicles. This paper introduces a novel machine learning-based sign recognition scheme. In the proposed scheme, detection and classification are realized through learning in a coarse-to-fine manner. Based on the observation that signs in the same category share some common attributes in appearance, the proposed scheme first distinguishes each individual sign category from the background in the coarse learning stage (i.e. sign detection), followed by distinguishing different sign classes within each category in the fine learning stage (i.e. sign classification). Both stages are realized through machine learning techniques. A complete recognition scheme is developed that is effective for simultaneously recognizing multiple categories of traffic signs. In addition, a novel saliency-based feature extraction method is proposed for sign classification. The method segments salient sign regions by leveraging geodesic energy propagation. Compared with conventional feature extraction, our method provides more reliable feature extraction from salient sign regions. The proposed scheme is tested and validated on two categories of Chinese traffic signs from Tencent street view. Evaluations on the test dataset show reasonably good performance, with an average of 97.5% true positives and 0.3% false positives on the two categories of traffic signs. (See the sketch below.)
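A minimal sketch of the coarse-to-fine scheme, assuming scikit-learn: one SVM assigns a sign category, then a per-category SVM picks the individual sign class. Feature extraction is abstracted away, inputs are NumPy arrays, and each category is assumed to contain at least two sign classes.

```python
# Sketch: two-level coarse-to-fine classification with SVMs.
import numpy as np
from sklearn.svm import SVC

class CoarseToFine:
    def fit(self, X, categories, classes):
        """X: feature matrix; categories: coarse labels; classes: fine labels."""
        self.coarse = SVC().fit(X, categories)
        self.fine = {}
        for c in np.unique(categories):
            idx = categories == c
            # Assumes each category contains at least two sign classes.
            self.fine[c] = SVC().fit(X[idx], classes[idx])
        return self

    def predict(self, X):
        """Coarse category first, then the fine classifier for that category."""
        cats = self.coarse.predict(X)
        return np.array([self.fine[c].predict(x[None])[0]
                         for c, x in zip(cats, X)])
```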
6.
  • Nilsson, Jonas, 1979, et al. (author)
  • Bundle adjustment using single-track vehicle model
  • 2013
  • In: Proceedings - IEEE International Conference on Robotics and Automation. - 1050-4729. - 9781467356411 ; pp. 2888-2893
  • Conference paper (peer-reviewed) abstract
    • This paper describes a method for estimating the 6-DoF viewing parameters of a calibrated vehicle-mounted camera. Visual features are combined with standard in-vehicle sensors and a single-track vehicle motion model in a bundle adjustment framework to produce a jointly optimal viewing parameter estimate. Results show that the vehicle motion model in combination with in-vehicle sensors exhibits good accuracy in estimating planar vehicle motion. This property is preserved when combining these information sources with vision. Furthermore, the accuracy obtained from vision only in direction estimation is not merely maintained but in fact further improved, primarily in situations where the matched visual features are few. (See the sketch below.)
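A sketch of the fusion idea, assuming SciPy: planar frame poses are refined by least squares over a combined residual, with the kinematic single-track (bicycle) model spelled out and the visual reprojection terms abstracted into a hypothetical visual_residuals callback. The wheelbase value and the planar simplification are assumptions; the paper estimates full 6-DoF poses.

```python
# Sketch: least-squares fusion of a single-track model with visual residuals.
# 'visual_residuals(poses)' is a hypothetical callback returning a 1-D array
# of reprojection errors; only the motion-model term is spelled out.
import numpy as np
from scipy.optimize import least_squares

L = 2.9  # wheelbase [m], assumed

def predict(pose, v, delta, dt):
    """Kinematic single-track model: advance (x, y, heading) one step."""
    x, y, th = pose
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + v / L * np.tan(delta) * dt])

def residuals(flat_poses, speeds, steers, dt, visual_residuals):
    """Stack motion-model residuals with the visual (reprojection) terms."""
    poses = flat_poses.reshape(-1, 3)
    model = [poses[k + 1] - predict(poses[k], speeds[k], steers[k], dt)
             for k in range(len(poses) - 1)]
    return np.concatenate([np.concatenate(model), visual_residuals(poses)])

def refine(init_poses, speeds, steers, dt, visual_residuals):
    """Jointly optimize all frame poses against both information sources."""
    sol = least_squares(residuals, init_poses.ravel(),
                        args=(speeds, steers, dt, visual_residuals))
    return sol.x.reshape(-1, 3)
```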
7.
  • Nilsson, Jonas, 1979, et al. (author)
  • On Worst Case Performance of Collision Avoidance Systems
  • 2010
  • In: IEEE Intelligent Vehicles Symposium, Proceedings; 2010 IEEE Intelligent Vehicles Symposium, IV 2010; La Jolla, CA; 21 June 2010 through 24 June 2010. - 9781424478668 ; pp. 1084-1091
  • Conference paper (peer-reviewed) abstract
    • Automotive Collision Avoidance and Mitigation (CA/CM) systems help drivers avoid collisions through autonomous interventions by braking or steering. If the decision to intervene is made too early, the intervention can become a nuisance to the driver, and if the decision is made too late, the safety benefits of the intervention are reduced. Decision timing is thus crucial for the successful operation of a CA/CM system. The decision to intervene is commonly taken when a threat function reaches a specific threshold. The dimensionality of the input state space for the threat function is in general very large, making exhaustive evaluation in real vehicles expensive and time consuming. This paper presents a method for efficient estimation of a lower bound on CA/CM system performance, i.e. the worst-case performance. The method is applied to an example system for a set of longitudinal single-object escape scenarios. Results show significant variation in worst-case decision timing across scenarios. (See the sketch below.)
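A toy illustration of the worst-case search, assuming SciPy: scenario parameters are varied by a global optimizer to minimize the safety margin left when a simple inverse-time-to-collision threat crosses its threshold. The threat function, threshold and parameter ranges are all stand-ins, not the paper's system.

```python
# Sketch: searching scenario parameters for worst-case decision timing.
from scipy.optimize import differential_evolution

THRESHOLD = 0.5  # intervene when 1/TTC exceeds this [1/s], assumed
DT = 0.01        # simulation step [s]

def margin(params):
    """Safety margin left after a full-braking intervention (toy model)."""
    v, gap, decel = params               # closing speed, initial gap, braking
    while gap > 0:
        if v / gap > THRESHOLD:          # threat crosses threshold: intervene
            return gap - v**2 / (2 * decel)   # gap minus stopping distance
        gap -= v * DT                    # object keeps closing in
    return gap                           # never intervened before impact

bounds = [(5, 30), (10, 80), (4, 9)]     # parameter ranges, assumed
worst = differential_evolution(margin, bounds, seed=0)
print("worst-case margin [m]:", worst.fun, "at parameters", worst.x)
```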
8.
  • Nilsson, Jonas, 1979, et al. (author)
  • Performance Evaluation Method for Mobile Computer Vision Systems using Augmented Reality
  • 2010
  • In: IEEE Virtual Reality 2010, VR 2010; Waltham, MA; United States; 20 March 2010 through 24 March 2010. - 9781424462582 ; pp. 19-22
  • Conference paper (peer-reviewed) abstract
    • This paper describes a framework which uses augmented reality for evaluating the performance of mobile computer vision systems. Computer vision systems primarily use image data to interpret the surrounding world, e.g. to detect, classify and track objects. The performance of mobile computer vision systems acting in unknown environments is inherently difficult to evaluate since obtaining ground truth data is often problematic. The proposed novel framework exploits the possibility of adding virtual agents into a real data sequence collected in an unknown environment, making it possible to efficiently create augmented data sequences, including ground truth, for performance evaluation. Varying the content of the data sequence by adding different virtual agents is straightforward, making the proposed framework very flexible. The method has been implemented and tested on a pedestrian detection system used for automotive collision avoidance. Preliminary results show that the method has potential to replace and complement physical testing, for instance by creating collision scenarios which are difficult to test in reality. (See the sketch below.)
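A sketch of the augmentation idea in plain NumPy: a virtual agent sprite is alpha-blended into a real frame, its bounding box is recorded as ground truth, and a detector is scored against it. The detector callback (returning (x, y, w, h) boxes) and the sprite are assumptions, not part of the paper.

```python
# Sketch: composite a virtual agent into real frames and score a detector.
import numpy as np

def augment(frame, sprite, alpha, top_left):
    """Alpha-blend an (h, w, 3) sprite into the frame; return frame + GT box."""
    y, x = top_left
    h, w = sprite.shape[:2]
    out = frame.copy()
    roi = out[y:y + h, x:x + w].astype(float)
    out[y:y + h, x:x + w] = (alpha[..., None] * sprite +
                             (1 - alpha[..., None]) * roi).astype(frame.dtype)
    return out, (x, y, w, h)             # ground-truth box comes for free

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / (aw * ah + bw * bh - inter)

def evaluate(frames, sprite, alpha, positions, detector, thr=0.5):
    """Detection rate of 'detector' on augmented frames with known GT."""
    hits = 0
    for frame, pos in zip(frames, positions):
        aug, gt = augment(frame, sprite, alpha, pos)
        hits += any(iou(gt, d) >= thr for d in detector(aug))
    return hits / len(frames)
```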
9.
  • Nilsson, Jonas, 1979, et al. (author)
  • Reliable Vehicle Pose Estimation Using Vision and a Single-Track Model
  • 2014
  • In: IEEE Transactions on Intelligent Transportation Systems. - Institute of Electrical and Electronics Engineers (IEEE). - 1524-9050, 1558-0016. ; 15:6, pp. 2630-2643
  • Journal article (peer-reviewed) abstract
    • This paper examines the problem of estimating vehicle position and direction, i.e., pose, from a single vehicle-mounted camera. A drawback of pose estimation using vision only is that it fails when image information is poor. Consequently, other information sources, e.g., motion models and sensors, may be used to complement vision to improve the estimates. We propose to combine standard in-vehicle sensor data and vehicle motion models with the accuracy of local visual bundle adjustment. This means that pose estimates are optimized with regard not only to observed image features but also to a single-track vehicle model and standard in-vehicle sensors. The described method has been experimentally tested on challenging data sets at both low and high vehicle speeds, as well as on a data set with moving objects. The vehicle motion model in combination with in-vehicle sensors exhibits good accuracy in estimating planar vehicle motion. Results show that this property is preserved when combining these information sources with vision. Furthermore, the accuracy obtained from vision only in direction estimation is improved, primarily in situations in which there are few matched visual features.
10.
  • Nilsson, Jonas, 1979, et al. (author)
  • Using Augmentation Techniques for Performance Evaluation in Automotive Safety
  • 2011
  • In: Handbook of Augmented Reality. - 9781461400639 ; pp. 631-649
  • Book chapter (other academic/artistic) abstract
    • This chapter describes a framework which uses augmentation techniques for performance evaluation of mobile computer vision systems. Computer vision systems primarily use image data to interpret the surrounding world, e.g. to detect, classify and track objects. The performance of mobile computer vision systems acting in unknown environments is inherently difficult to evaluate since obtaining ground truth data is often problematic. The proposed novel framework exploits the possibility of adding new agents into a real data sequence collected in an unknown environment, making it possible to efficiently create augmented data sequences, including ground truth, for performance evaluation. Varying the content of the data sequence by adding different agents or changing the behavior of an agent is straightforward, making the proposed framework very flexible. A key driver for using augmentation techniques to address computer vision performance is that the vision system output may be sensitive to the background data content. The method has been implemented and tested on a pedestrian detection system used for automotive collision avoidance. Results show that the method has potential to replace and complement physical testing, for instance by creating collision scenarios which are difficult to test in reality, particularly in a real traffic environment.