SwePub
Search the SwePub database


Hit list for search "WFRF:(Hornauer Sascha)"


  • Result 1-3 of 3
1.
  • Hornauer, Sascha, et al. (author)
  • Driving scene retrieval by example from large-scale data
  • 2019
  • In: CVPR Workshops 2019.
  • Conference paper (peer-reviewed). Abstract:
    • Many machine learning approaches train networks with input from large datasets to reach high task performance. Collected datasets, such as Berkeley Deep Drive Video (BDD-V) for autonomous driving, contain a large variety of scenes and hence features. However, depending on the task, subsets containing certain features more densely support training better than others. For example, training networks on tasks such as image segmentation, bounding box detection or tracking requires an ample number of objects in the input data. When training a network to perform optical flow estimation from first-person video, over-proportionally many straight driving scenes in the training data may lower generalization to turns. Even though some scenes of the BDD-V dataset are labeled with scene, weather or time-of-day information, these may be too coarse to filter the dataset best for a particular training task. Furthermore, even defining an exhaustive list of good label types is complicated, as it requires choosing the most relevant concepts of the natural world for a task. Alternatively, we investigate how to use examples of desired data to retrieve more similar data from a large-scale dataset. Following the paradigm of "I know it when I see it", we present a deep learning approach to use driving examples for retrieving similar scenes from the BDD-V dataset. Our method leverages only automatically collected labels. We show how we can reliably vary time of day or objects in our query examples and retrieve nearest neighbors from the dataset. Using this method, already collected data can be filtered to remove bias from a dataset, removing scenes regarded as too redundant to train on.
  •  
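The retrieval idea in the first paper, embedding scenes and finding nearest neighbors of a query example, can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the feature-extraction network is assumed to exist elsewhere, and `retrieve_similar` operates on toy random vectors standing in for scene embeddings.

```python
import numpy as np

def retrieve_similar(query_emb, db_embs, k=3):
    """Return indices of the k database embeddings most similar to the query.

    Similarity is cosine similarity between L2-normalized feature vectors,
    as is common for nearest-neighbor retrieval in a learned embedding space.
    """
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q                  # cosine similarity to every database item
    return np.argsort(-sims)[:k]   # indices of the k nearest neighbors

# Toy demo: five random "scene embeddings"; the query is item 2 itself,
# so item 2 should come back as its own nearest neighbor.
rng = np.random.default_rng(0)
db = rng.normal(size=(5, 8))
idx = retrieve_similar(db[2], db, k=2)
```

With real embeddings, `db_embs` would hold one feature vector per dataset scene, and filtering the dataset amounts to keeping or dropping the retrieved indices.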
2.
  • Ranjbar, Arian, 1992, et al. (author)
  • Safety Monitoring of Neural Networks Using Unsupervised Feature Learning and Novelty Estimation
  • 2022
  • In: IEEE Transactions on Intelligent Vehicles. - 2379-8858. ; 7:3, pp. 711-721
  • Journal article (peer-reviewed). Abstract:
    • Neural networks are currently suggested to be implemented in several different driving functions of autonomous vehicles. While they show promising results, the drawback lies in the difficulty of safety verification and ensuring operation as intended. The aim of this paper is to increase safety when using neural networks by proposing a monitoring framework based on novelty estimation of incoming driving data. The idea is to use unsupervised instance discrimination to learn a similarity measure across ego-vehicle camera images. By estimating a von Mises-Fisher distribution of expected ego-camera images, they can be compared with unexpected novel images. A novelty measurement is inferred through the likelihood of test frames belonging to the expected distribution. The suggested method provides competitive results against several other novelty or anomaly detection algorithms on the CIFAR-10 and CIFAR-100 datasets. It also shows promising results on real-world driving scenarios by distinguishing novel driving scenes from the training data of BDD100k. Applied to the identical training-test data split, the method is also able to predict the performance profile of a segmentation network. Finally, examples are provided of how this method can be extended to find novel segments in images.
  •  
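The monitoring step described above, scoring how likely a test frame is under a von Mises-Fisher (vMF) distribution fitted to expected ego-camera features, can be illustrated as follows. This is a minimal sketch, not the paper's implementation: the feature extractor is assumed, and since the vMF log-likelihood is kappa * (mu . x) + const for unit vectors x, the concentration kappa is omitted because it does not change the novelty ranking.

```python
import numpy as np

def fit_vmf_mean(train_feats):
    """Estimate the vMF mean direction mu from L2-normalized training features."""
    z = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    m = z.mean(axis=0)
    return m / np.linalg.norm(m)

def novelty_score(test_feats, mu):
    """Higher score = lower vMF likelihood = more novel.

    For fixed concentration, ranking by -mu.x is equivalent to ranking by
    negative log-likelihood under the fitted vMF distribution.
    """
    z = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    return -(z @ mu)

# Toy demo: training features cluster around one direction; a frame pointing
# the opposite way should score as more novel than an in-distribution one.
rng = np.random.default_rng(1)
train = np.array([1.0, 0.0, 0.0]) + 0.1 * rng.normal(size=(100, 3))
mu = fit_vmf_mean(train)
in_dist = np.array([[0.95, 0.05, 0.0]])
novel = np.array([[-1.0, 0.2, 0.1]])
scores = novelty_score(np.vstack([in_dist, novel]), mu)
```

In a monitoring loop, frames whose score exceeds a threshold calibrated on held-out expected data would be flagged as outside the network's familiar operating domain.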
3.
  • Ranjbar, Arian, 1992, et al. (author)
  • Scene Novelty Prediction from Unsupervised Discriminative Feature Learning
  • 2020
  • In: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC).
  • Conference paper (peer-reviewed). Abstract:
    • Deep learning approaches are widely explored in safety-critical autonomous driving systems on various tasks. Network models, trained on big data, map input to probable prediction results. However, it is unclear how to get a measure of confidence in this prediction at test time. Our approach to gaining this additional information is to estimate how similar test data is to the training data that the model was trained on. We map training instances onto a feature space that is the most discriminative among them. We then model the entire training set as a Gaussian distribution in that feature space. The novelty of the test data is characterized by its low probability of being in that distribution, or equivalently a large Mahalanobis distance in the feature space. Our distance metric in the discriminative feature space achieves better novelty prediction performance than the state-of-the-art methods on most classes in CIFAR-10 and ImageNet. Using semantic segmentation as a proxy task often needed for autonomous driving, we show that our unsupervised novelty prediction correlates with the performance of a segmentation network trained on full pixel-wise annotations. These experimental results demonstrate potential applications of our method in identifying scene familiarity and quantifying the confidence in autonomous driving actions.
  •  
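The Gaussian-plus-Mahalanobis scoring described in this abstract can be sketched directly. Again a hedged illustration rather than the authors' code: the discriminative feature extractor is assumed, random vectors stand in for its outputs, and the small diagonal term added to the covariance is a common regularization choice, not something stated in the paper.

```python
import numpy as np

def fit_gaussian(train_feats):
    """Mean and inverse (regularized) covariance of the training features."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    """Mahalanobis distance of x from the fitted training Gaussian.

    A large distance means low probability under the training distribution,
    i.e. a novel input.
    """
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Toy demo: features drawn from a standard Gaussian stand in for the
# training set; a point far from the training mean should get a much
# larger distance than one near it.
rng = np.random.default_rng(2)
train = rng.normal(size=(500, 4))
mu, cov_inv = fit_gaussian(train)
near = mahalanobis(np.zeros(4), mu, cov_inv)
far = mahalanobis(10 * np.ones(4), mu, cov_inv)
```

Thresholding this distance gives the familiarity signal the abstract correlates with downstream segmentation performance.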


 
