SwePub
Search the SwePub database


Boolean operators must be written in UPPERCASE

Hit list for the search "AMNE:(NATURAL SCIENCES Computer and Information Sciences Computer Vision and Robotics Autonomous Systems) ;lar1:(ltu)"

Search: AMNE:(NATURAL SCIENCES Computer and Information Sciences Computer Vision and Robotics Autonomous Systems) > Luleå tekniska universitet

  • Results 1-6 of 6
1.
  • Estrela, Vania V., et al. (authors)
  • Conclusions
  • 2020
  • In: Imaging and Sensing for Unmanned Aircraft Systems Volume 2. - : Institution of Engineering and Technology. - 9781785616440 - 9781785616457 ; , s. 247-248
  • Book chapter (other academic/artistic) abstract
    • The current interest in unmanned aerial vehicles (UAVs) has prompted not only military applications but also civilian uses. Requirements for aerial vehicles aspire to guarantee a level of safety comparable to the see-and-avoid conditions for piloted aeroplanes. The process of probing obstacles in the path of a vehicle and determining whether they pose a threat, alongside measures to avoid these issues, is known as see and avoid or sense and avoid. Other types of decision-making tasks can be accomplished using computer vision and sensor integration, since they have great potential to improve the performance of UAVs. Macroscopically, UAVs are cyber-physical systems (CPSs) that can benefit from all types of sensing frameworks, despite severe design constraints such as precision, reliable communication, distributed processing capabilities and data management. This book attends to several issues that are still under discussion in the field of UAV-CPSs; several trends and needs are discussed to foster criticism from readers and to provide further food for thought.
2.
  • Lindqvist, Bjorn, et al. (authors)
  • A Tree-based Next-best-trajectory Method for 3D UAV Exploration
  • 2024
  • In: IEEE Transactions on Robotics. - : IEEE. - 1552-3098 .- 1941-0468. ; 40, s. 3496-3513
  • Journal article (peer-reviewed) abstract
    • This work presents a fully integrated tree-based combined exploration-planning algorithm: Exploration-RRT (ERRT), built on rapidly-exploring random trees (RRT). The algorithm is focused on providing real-time solutions for local exploration in a fully unknown and unstructured environment while directly incorporating exploratory behavior, robot-safe path planning, and robot actuation into the central problem. ERRT provides a complete sampling- and tree-based solution for evaluating "where to go next" by considering a tradeoff between maximizing information gain and minimizing the distance traveled and the robot actuation along the path. The complete scheme is evaluated in extensive simulations, comparisons, and real-world field experiments in constrained and narrow subterranean and GPS-denied environments. The framework is fully integrated with the robot operating system (ROS) and straightforward to use.
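The "where to go next" tradeoff described in the ERRT abstract can be sketched as a utility that rewards information gain and penalizes travel distance and actuation. This is a hypothetical illustration of that idea; the function names, candidate format, and weights are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of ERRT's branch evaluation: score each candidate
# trajectory by information gain minus weighted travel and actuation cost.
# Weights w_dist and w_act are illustrative assumptions.

def branch_score(info_gain, path_length, actuation_cost,
                 w_dist=1.0, w_act=0.5):
    """Utility of one candidate trajectory branch."""
    return info_gain - w_dist * path_length - w_act * actuation_cost

def next_best_branch(branches):
    """Pick the branch maximizing the exploration utility."""
    return max(branches, key=lambda b: branch_score(*b))

# Each candidate: (unknown volume seen, meters traveled, actuation effort).
candidates = [
    (12.0, 4.0, 1.0),
    (15.0, 10.0, 3.0),
    (8.0, 2.0, 0.5),
]
best = next_best_branch(candidates)  # the short, informative branch wins
```

A pure greedy argmax like this ignores tree structure; the point is only the gain-versus-cost objective the abstract describes.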
3.
  • Imaging and sensing for unmanned aircraft systems Volume 2: Deployment and applications
  • 2020
  • Edited collection (editorship) (other academic/artistic) abstract
    • This two-volume book set explores how sensors and computer vision technologies are used for the navigation, control, stability, reliability, guidance, fault detection, self-maintenance, strategic re-planning and reconfiguration of unmanned aircraft systems (UAS). Volume 1 concentrates on UAS control and performance methodologies including Computer Vision and Data Storage, Integrated Optical Flow for Detection and Avoidance Systems, Navigation and Intelligence, Modeling and Simulation, Multisensor Data Fusion, Vision in Micro-Aerial Vehicles (MAVs), Computer Vision in UAV using ROS, Security Aspects of UAV and Robot Operating System, Vision in Indoor and Outdoor Drones, Sensors and Computer Vision, and Small UAVs for Persistent Surveillance. Volume 2 focuses on UAS deployment and applications including UAV-CPSs as a Testbed for New Technologies and a Primer to Industry 5.0, Human-Machine Interface Design, Open Source Software (OSS) and Hardware (OSH), Image Transmission in MIMO-OSTBC System, Image Database, Communications Requirements, Video Streaming, and Communications Links, Multispectral vs Hyperspectral Imaging, Aerial Imaging and Reconstruction of Infrastructures, Deep Learning as an Alternative to Super Resolution Imaging, and Quality of Experience (QoE) and Quality of Service (QoS).
4.
  • Estrela, Vania V., et al. (authors)
  • Conclusions
  • 2020
  • In: Imaging and Sensing for Unmanned Aircraft Systems Volume 1. - : Institution of Engineering and Technology. - 9781785616426 - 9781785616433 ; , s. 333-335
  • Book chapter (other academic/artistic) abstract
    • The current interest in UAVs has prompted not only military applications but also civilian uses. Requirements for aerial vehicles aspire to guarantee a level of safety comparable to the see-and-avoid conditions for piloted aeroplanes. The process of probing obstacles in the path of a vehicle, determining whether they pose a threat, and taking measures to avoid problems is known as see and avoid or sense and avoid, and involves a great deal of decision-making. Other decision-making tasks can be accomplished using computer vision and sensor integration, since they have great potential to improve the performance of UAVs. Macroscopically, Unmanned Aerial Systems (UASs) are cyber-physical systems (CPSs) that can benefit from all types of sensing frameworks, despite severe design constraints such as precision, reliable communication, distributed processing capabilities, and data management.
5.
  • Saucedo, Mario A. V., et al. (authors)
  • EAT: Environment Agnostic Traversability for reactive navigation
  • 2024
  • In: Expert Systems with Applications. - : Elsevier Ltd. - 0957-4174 .- 1873-6793. ; 244
  • Journal article (peer-reviewed) abstract
    • This work presents EAT (Environment Agnostic Traversability for Reactive Navigation), a novel framework for traversability estimation in indoor, outdoor, subterranean (SubT) and other unstructured environments. The architecture provides online updates on traversable regions during the mission and adapts to varying environments, while being robust to noisy semantic image segmentation. The proposed framework considers terrain prioritization based on a novel decaying exponential function to fuse the semantic information and geometric features extracted from RGB-D images and obtain the traversability of the scene. Moreover, EAT introduces an obstacle inflation mechanism on the traversability image, based on a mean-window weighting module, allowing it to adapt the proximity to untraversable regions. The overall architecture uses two LRASPP MobileNet V3 Large convolutional neural networks (CNNs) for semantic segmentation over RGB images, where the first classifies the terrain types and the second classifies see-through obstacles in the scene. Additionally, the geometric features profile the underlying surface properties of the local scene, extracting normals from depth images. The proposed scheme was integrated with a control architecture in reactive navigation scenarios and was experimentally validated in indoor, outdoor and subterranean environments with a Pioneer 3AT mobile robot.
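The terrain-prioritization idea in the EAT abstract, a decaying exponential over semantic terrain classes fused with a geometric cue from surface normals, can be sketched as follows. The class ranking, decay constant, and fusion rule are illustrative assumptions, not the published parameters.

```python
import math

# Illustrative sketch of EAT-style fusion: semantic terrain classes are
# ranked by preference and weighted by a decaying exponential, then
# multiplied by a slope cue from the surface normal. All constants here
# are assumptions for illustration.

TERRAIN_RANK = {"paved": 0, "gravel": 1, "grass": 2, "rubble": 3}

def semantic_weight(terrain, decay=0.7):
    """Exponentially decaying priority: preferred terrain scores near 1.0."""
    return math.exp(-decay * TERRAIN_RANK[terrain])

def traversability(terrain, normal_z):
    """Fuse semantic priority with a slope cue: normal_z is the z-component
    of the unit surface normal (1.0 = flat ground, 0.0 = vertical wall)."""
    return semantic_weight(terrain) * max(0.0, normal_z)

flat_paved = traversability("paved", 1.0)     # high traversability
steep_rubble = traversability("rubble", 0.3)  # much lower
```

A per-pixel version of this score, computed over the segmentation and depth images, would yield the traversability image the abstract refers to.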
6.
  • Borngrund, Carl, 1992-, et al. (authors)
  • Semi-Automatic Video Frame Annotation for Construction Equipment Automation Using Scale-Models
  • 2021
  • In: IECON 2021 – 47th Annual Conference of the IEEE Industrial Electronics Society. - : IEEE.
  • Conference paper (peer-reviewed) abstract
    • Data collection and annotation is a time-consuming and costly process, yet necessary for machine vision. Automation of construction equipment relies on seeing and detecting different objects in the vehicle's surroundings. Construction equipment is commonly used to perform frequent repetitive tasks, which are interesting to automate. An example of such a task is the short-loading cycle, where material is moved from a pile into the tipping body of a dump truck for transport. To complete this task, the wheel loader needs the capability to locate the tipping body of the dump truck. The machine vision system also allows the vehicle to detect unforeseen dangers such as other vehicles and, more importantly, human workers. In this work, we investigate the viability of performing semi-automatic annotation of video data using linear interpolation. The data is collected using scale models mimicking a wheel loader's approach towards a dump truck during the short-loading cycle. To measure the viability of this type of solution, the workload is compared to the accuracy of the model, YOLOv3. The results indicate that it is possible to maintain performance while decreasing the annotation workload by about 95%. This is an interesting result for this application domain, as safety is critical and retaining the vision system's performance is more important than decreasing the annotation workload. The fact that performance seems to be retained despite a large workload decrease is an encouraging sign.
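The semi-automatic annotation scheme in the abstract above, hand-labeling keyframes and linearly interpolating the frames in between, can be sketched in a few lines. The (x, y, w, h) box format and frame indexing are assumptions for illustration, not the paper's exact tooling.

```python
# Minimal sketch of semi-automatic annotation by linear interpolation:
# bounding boxes are hand-labeled on two keyframes, and every frame in
# between gets an interpolated box. Box format (x, y, w, h) is assumed.

def interpolate_boxes(frame_a, box_a, frame_b, box_b):
    """Linearly interpolate boxes for frames strictly between two
    hand-annotated keyframes frame_a < frame_b."""
    boxes = {}
    span = frame_b - frame_a
    for f in range(frame_a + 1, frame_b):
        t = (f - frame_a) / span  # interpolation fraction in (0, 1)
        boxes[f] = tuple(a + t * (b - a) for a, b in zip(box_a, box_b))
    return boxes

# Keyframes 0 and 10 annotated by hand; frames 1-9 are filled in
# automatically, so only 2 of 11 frames need manual work.
auto = interpolate_boxes(0, (100, 50, 40, 30), 10, (200, 50, 40, 30))
# auto[5] == (150.0, 50.0, 40.0, 30.0)
```

Annotating every tenth frame by hand and interpolating the rest is one way the roughly 95% workload reduction reported in the abstract could arise, assuming near-linear object motion between keyframes.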
