SwePub
Search the SwePub database


Result list for the search "WFRF:(Wadenbäck Mårten)"

Search: WFRF:(Wadenbäck Mårten)

  • Result 1-19 of 19
1.
  • Chojnacki, W., et al. (author)
  • The equivalence of two definitions of compatible homography matrices
  • 2020
  • In: Pattern Recognition Letters. - : Elsevier BV. - 0167-8655 .- 1872-7344. ; 135, s. 38-43
  • Journal article (peer-reviewed). Abstract:
    • In many computer vision applications, one acquires images of planar surfaces from two different vantage points. One can use a projective transformation to map pixel coordinates associated with a particular planar surface from one image to another. The transformation, called a homography, can be represented by a unique, to within a scale factor, 3 × 3 matrix. One requires a different homography matrix, scale differences apart, for each planar surface whose two images one wants to relate. However, a collection of homography matrices forms a valid set only if the matrices satisfy consistency constraints implied by the rigidity of the motion and the scene. We explore what it means for a set of homography matrices to be compatible and show that two seemingly disparate definitions are in fact equivalent. Our insight lays the theoretical foundations upon which the derivation of various sets of homography consistency constraints can proceed. © 2020 Elsevier B.V.
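Note: The abstract above uses the standard plane-induced homography. For a calibrated camera pair with relative pose (R, t) and a plane with unit normal n_i at distance d_i from the first camera, the induced homography in pixel coordinates (with calibration matrix K) is

    H_i ∝ K (R + t n_i^T / d_i) K^(-1),

so, in the sense sketched in the abstract, a set {H_i} is compatible when all its members are generated by one and the same (R, t) and differ only in the plane parameters (n_i, d_i). This is the standard multi-view geometry formulation; the paper's precise definitions of compatibility may be stated differently.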
2.
  • Edstedt, Johan, doctoral student, et al. (author)
  • DeDoDe: Detect, Don’t Describe — Describe, Don’t Detect for Local Feature Matching
  • 2024
  • In: 2024 International Conference on 3D Vision (3DV). - : Institute of Electrical and Electronics Engineers (IEEE). - 9798350362459 - 9798350362466
  • Conference paper (peer-reviewed). Abstract:
    • Keypoint detection is a pivotal step in 3D reconstruction, whereby sets of (up to) K points are detected in each view of a scene. Crucially, the detected points need to be consistent between views, i.e., correspond to the same 3D point in the scene. One of the main challenges with keypoint detection is the formulation of the learning objective. Previous learning-based methods typically jointly learn descriptors with keypoints, and treat the keypoint detection as a binary classification task on mutual nearest neighbours. However, basing keypoint detection on descriptor nearest neighbours is a proxy task, which is not guaranteed to produce 3D-consistent keypoints. Furthermore, this ties the keypoints to a specific descriptor, complicating downstream usage. In this work, we instead learn keypoints directly from 3D consistency. To this end, we train the detector to detect tracks from large-scale SfM. As these points are often overly sparse, we derive a semi-supervised two-view detection objective to expand this set to a desired number of detections. To train a descriptor, we maximize the mutual nearest neighbour objective over the keypoints with a separate network. Results show that our approach, DeDoDe, achieves significant gains on multiple geometry benchmarks. Code is provided at http://github.com/Parskatt/DeDoDe.
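Note: The abstract above refers to mutual nearest neighbours between descriptor sets. As a minimal illustration of that notion (not the paper's code; the names and API below are this note's own), mutual nearest neighbour matching can be sketched as:

    import numpy as np

    def mutual_nearest_neighbours(desc_a, desc_b):
        # Return index pairs (i, j) where row i of desc_a and row j of desc_b
        # are each other's nearest neighbour under Euclidean distance.
        d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
        nn_ab = d2.argmin(axis=1)      # best match in B for each row of A
        nn_ba = d2.argmin(axis=0)      # best match in A for each row of B
        idx_a = np.arange(len(desc_a))
        keep = nn_ba[nn_ab] == idx_a   # keep only mutually consistent pairs
        return np.stack([idx_a[keep], nn_ab[keep]], axis=1)

Treating detection as a classification task on such pairs is the proxy objective the paper argues against; the snippet only illustrates the matching criterion itself.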
3.
  • Edstedt, Johan, et al. (author)
  • DeDoDe: Detect, Don't Describe - Describe, Don't Detect for Local Feature Matching
  • 2024
  • In: Proceedings - 2024 International Conference on 3D Vision, 3DV 2024. ; , s. 148-157
  • Conference paper (peer-reviewed). Abstract:
    • Keypoint detection is a pivotal step in 3D reconstruction, whereby sets of (up to) K points are detected in each view of a scene. Crucially, the detected points need to be consistent between views, i.e., correspond to the same 3D point in the scene. One of the main challenges with keypoint detection is the formulation of the learning objective. Previous learning-based methods typically jointly learn descriptors with keypoints, and treat the keypoint detection as a binary classification task on mutual nearest neighbours. However, basing keypoint detection on descriptor nearest neighbours is a proxy task, which is not guaranteed to produce 3D-consistent keypoints. Furthermore, this ties the keypoints to a specific descriptor, complicating downstream usage. In this work, we instead learn keypoints directly from 3D consistency. To this end, we train the detector to detect tracks from large-scale SfM. As these points are often overly sparse, we derive a semi-supervised two-view detection objective to expand this set to a desired number of detections. To train a descriptor, we maximize the mutual nearest neighbour objective over the keypoints with a separate network. Results show that our approach, DeDoDe, achieves significant gains on multiple geometry benchmarks. Code is provided at http://github.com/Parskatt/DeDoDe.
4.
  • Edstedt, Johan, doctoral student, et al. (author)
  • DKM: Dense Kernelized Feature Matching for Geometry Estimation
  • 2023
  • In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). - : IEEE Communications Society. - 9798350301298 - 9798350301304 ; , s. 17765-17775
  • Conference paper (peer-reviewed). Abstract:
    • Feature matching is a challenging computer vision task that involves finding correspondences between two images of a 3D scene. In this paper we consider the dense approach instead of the more common sparse paradigm, thus striving to find all correspondences. Perhaps counter-intuitively, dense methods have previously shown inferior performance to their sparse and semi-sparse counterparts for estimation of two-view geometry. This changes with our novel dense method, which outperforms both dense and sparse methods on geometry estimation. The novelty is threefold: First, we propose a kernel regression global matcher. Secondly, we propose warp refinement through stacked feature maps and depthwise convolution kernels. Thirdly, we propose learning dense confidence through consistent depth and a balanced sampling approach for dense confidence maps. Through extensive experiments we confirm that our proposed dense method, Dense Kernelized Feature Matching, sets a new state-of-the-art on multiple geometry estimation benchmarks. In particular, we achieve an improvement on MegaDepth-1500 of +4.9 and +8.9 AUC@5° compared to the best previous sparse method and dense method respectively. Our code is provided at the following repository: https://github.com/Parskatt/DKM.
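Note: The AUC@5° numbers quoted above are areas under the pose-accuracy curve up to a 5-degree error threshold, a metric commonly used in two-view geometry benchmarks. The record does not include the evaluation code; the sketch below shows one common way such a score is computed (an assumption about the benchmark convention, not the paper's script):

    import numpy as np

    def pose_auc(errors_deg, threshold_deg=5.0, resolution=1000):
        # errors_deg: per-image-pair pose errors in degrees (e.g. the maximum
        # of the rotation and translation angular errors).
        errors = np.sort(np.asarray(errors_deg, dtype=float))
        ts = np.linspace(0.0, threshold_deg, resolution)
        # recall(t): fraction of pairs whose error is at most t.
        recall = np.searchsorted(errors, ts, side="right") / len(errors)
        # Normalised area under the recall curve on [0, threshold].
        return np.trapz(recall, ts) / threshold_deg

    print(pose_auc([1.0, 2.0, 3.0, 10.0]))   # three of four pairs are under 5 degrees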
5.
  • Melnyk, Pavlo, et al. (author)
  • Embed Me If You Can: A Geometric Perceptron
  • 2021
  • In: Proceedings 2021 IEEE/CVF International Conference on Computer Vision ICCV 2021. - : Institute of Electrical and Electronics Engineers (IEEE). - 9781665428125 - 9781665428132 ; , s. 1256-1264
  • Conference paper (peer-reviewed). Abstract:
    • Solving geometric tasks involving point clouds by using machine learning is a challenging problem. Standard feed-forward neural networks combine linear or, if the bias parameter is included, affine layers and activation functions. Their geometric modeling is limited, which motivated the prior work introducing the multilayer hypersphere perceptron (MLHP). Its constituent part, i.e., the hypersphere neuron, is obtained by applying a conformal embedding of Euclidean space. By virtue of Clifford algebra, it can be implemented as the Cartesian dot product of inputs and weights. If the embedding is applied in a manner consistent with the dimensionality of the input space geometry, the decision surfaces of the model units become combinations of hyperspheres and make the decision-making process geometrically interpretable for humans. Our extension of the MLHP model, the multilayer geometric perceptron (MLGP), and its respective layer units, i.e., geometric neurons, are consistent with the 3D geometry and provide a geometric handle of the learned coefficients. In particular, the geometric neuron activations are isometric in 3D, which is necessary for rotation and translation equivariance. When classifying the 3D Tetris shapes, we quantitatively show that our model requires no activation function in the hidden layers other than the embedding to outperform the vanilla multilayer perceptron. In the presence of noise in the data, our model is also superior to the MLHP.
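Note: The hypersphere neuron referred to above evaluates a sphere-based decision function as a plain dot product after embedding the input. A minimal sketch, assuming one common sign convention (the paper's convention may differ):

    import numpy as np

    def embed_point(x):
        # Lift a Euclidean point so that a sphere test becomes a dot product.
        x = np.asarray(x, dtype=float)
        return np.concatenate([x, [-0.5 * (x @ x), 1.0]])

    def sphere_weights(center, radius):
        # Weight vector representing the sphere with the given centre and radius.
        c = np.asarray(center, dtype=float)
        return np.concatenate([c, [1.0, 0.5 * (radius**2 - c @ c)]])

    # Activation S . X = -(1/2)(||x - c||^2 - r^2): zero exactly on the sphere,
    # positive inside it and negative outside it.
    x, c, r = np.array([1.0, 2.0, 2.0]), np.array([1.0, 2.0, 0.0]), 2.0
    print(sphere_weights(c, r) @ embed_point(x))   # ~0, since x lies on the sphere

The decision surface of each unit is thus a hypersphere rather than a hyperplane, which is the geometric interpretability the abstract refers to.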
6.
  • Melnyk, Pavlo, et al. (author)
  • Steerable 3D Spherical Neurons
  • 2022
  • In: Proceedings of the 39th International Conference on Machine Learning. - : PMLR. ; , s. 15330-15339
  • Conference paper (peer-reviewed). Abstract:
    • Emerging from low-level vision theory, steerable filters found their counterpart in prior work on steerable convolutional neural networks equivariant to rigid transformations. In our work, we propose a steerable feed-forward learning-based approach that consists of neurons with spherical decision surfaces and operates on point clouds. Such spherical neurons are obtained by conformal embedding of Euclidean space and have recently been revisited in the context of learning representations of point sets. Focusing on 3D geometry, we exploit the isometry property of spherical neurons and derive a 3D steerability constraint. After training spherical neurons to classify point clouds in a canonical orientation, we use a tetrahedron basis to quadruplicate the neurons and construct rotation-equivariant spherical filter banks. We then apply the derived constraint to interpolate the filter bank outputs and, thus, obtain a rotation-invariant network. Finally, we use a synthetic point set and real-world 3D skeleton data to verify our theoretical findings. The code is available at https://github.com/pavlo-melnyk/steerable-3d-neurons.
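Note: The isometry property exploited above amounts to the fact that rotating an input point together with a sphere's centre leaves the spherical activation unchanged, since rotations preserve distances. A tiny numerical check (illustrative only):

    import numpy as np
    from scipy.spatial.transform import Rotation

    def sphere_activation(x, c, r):
        # -(1/2)(||x - c||^2 - r^2), the spherical decision function.
        return -0.5 * (np.sum((x - c) ** 2) - r**2)

    rng = np.random.default_rng(0)
    x, c, r = rng.normal(size=3), rng.normal(size=3), 1.5
    R = Rotation.random(random_state=1).as_matrix()
    print(np.isclose(sphere_activation(x, c, r),
                     sphere_activation(R @ x, R @ c, r)))   # True

The steerability construction in the paper (tetrahedron basis, rotation-equivariant filter banks) builds on this property but is not reproduced here.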
7.
  • Ornhag, Marcus Valtonen, et al. (author)
  • Efficient real-time radial distortion correction for UAVs
  • 2021
  • In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). - 9781665404778 - 9780738142661 - 9781665446402 ; , s. 1750-1759
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we present a novel algorithm for onboard radial distortion correction for unmanned aerial vehicles (UAVs) equipped with an inertial measurement unit (IMU), which runs in real time. This approach makes calibration procedures redundant, thus allowing for exchange of optics extemporaneously. By utilizing the IMU data, the cameras can be aligned with the gravity direction. This allows us to work with fewer degrees of freedom, and opens up the possibility of further intrinsic calibration. We propose a fast and robust minimal solver for simultaneously estimating the focal length, radial distortion profile and motion parameters from homographies. The proposed solver is tested on both synthetic and real data, and performs better than or on par with state-of-the-art methods relying on pre-calibration procedures. Code available at: https://github.com/marcusvaltonen/HomLib.
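Note: The solver above estimates a radial distortion profile jointly with the focal length. Minimal solvers in this area frequently use the one-parameter division model; the snippet below illustrates that model as background (an assumption about the general setting, not a claim about this particular implementation):

    import numpy as np

    def undistort_division_model(points, lam):
        # One-parameter division model: x_undistorted = x_distorted / (1 + lam * r^2),
        # where r is the distance of the distorted point from the distortion centre.
        points = np.asarray(points, dtype=float)
        r2 = np.sum(points**2, axis=-1, keepdims=True)
        return points / (1.0 + lam * r2)

    # A point far from the centre is moved much more than one near it.
    pts = np.array([[0.05, 0.02], [0.60, -0.45]])
    print(undistort_division_model(pts, lam=-0.2))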
8.
  • Valtonen Örnhag, Marcus, et al. (author)
  • Enforcing the General Planar Motion Model : Bundle Adjustment for Planar Scenes
  • 2020
  • In: Pattern Recognition Applications and Methods - 8th International Conference, ICPRAM 2019, Revised Selected Papers. - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. - 9783030400132 ; 11996 LNCS, s. 119-135
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we consider the case of planar motion, where a mobile platform equipped with two cameras moves freely on a planar surface. The cameras are assumed to be directed towards the floor, as well as being connected by a rigid body motion, which constrains the relative motion of the cameras and introduces new geometric constraints. In the existing literature, there are several algorithms available to obtain planar motion compatible homographies. These methods, however, do not minimise a physically meaningful quantity, which may lead to issues when tracking the mobile platform globally. As a remedy, we propose a bundle adjustment algorithm tailored for the specific problem geometry. Due to the new constrained model, general bundle adjustment frameworks, compatible with the standard six degree of freedom model, are not directly applicable, and we propose an efficient method to reduce the computational complexity, by utilising the sparse structure of the problem. We explore the impact of different polynomial solvers on synthetic data, and highlight various trade-offs between speed and accuracy. Furthermore, on real data, the proposed method shows an improvement compared to generic methods not enforcing the general planar motion model.
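Note: The general planar motion model discussed above restricts each camera pose to a motion in the floor plane at a fixed height with a fixed tilt, so only a planar position and a heading vary per frame. A hedged sketch of one possible parametrisation (names and conventions are this note's own, not the paper's):

    import numpy as np

    def planar_pose(x, y, heading, height, tilt_roll, tilt_pitch):
        # 4x4 camera-to-world transform: per-frame (x, y, heading) plus a
        # height and tilt that are shared by all frames.
        def rot_x(a):
            c, s = np.cos(a), np.sin(a)
            return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
        def rot_y(a):
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        def rot_z(a):
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        T = np.eye(4)
        T[:3, :3] = rot_z(heading) @ rot_x(tilt_roll) @ rot_y(tilt_pitch)
        T[:3, 3] = [x, y, height]
        return T

    print(planar_pose(1.0, 2.0, np.deg2rad(30), 0.5, np.deg2rad(2), np.deg2rad(-1)))

A bundle adjustment that optimises only these per-frame parameters (plus the shared height and tilt) enforces the constraint by construction, which is why generic six-degree-of-freedom frameworks are not directly applicable, as the abstract notes.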
9.
  • Valtonen Örnhag, Marcus, et al. (author)
  • Trust Your IMU: Consequences of Ignoring the IMU Drift
  • 2022
  • In: Proceedings 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. - : IEEE Computer Society. - 9781665487399 - 9781665487405 ; , s. 4467-4476
  • Conference paper (peer-reviewed). Abstract:
    • In this paper, we argue that modern pre-integration methods for inertial measurement units (IMUs) are accurate enough to ignore the drift for short time intervals. This allows us to consider a simplified camera model, which in turn admits further intrinsic calibration. We develop the first-ever solver to jointly solve the relative pose problem with unknown and equal focal length and radial distortion profile while utilizing the IMU data. Furthermore, we show significant speed-up compared to state-of-the-art algorithms, with small or negligible loss in accuracy for partially calibrated setups. The proposed algorithms are tested on both synthetic and real data, where the latter is focused on navigation using unmanned aerial vehicles (UAVs). We evaluate the proposed solvers on different commercially available low-cost UAVs, and demonstrate that the novel assumption on IMU drift is feasible in real-life applications. The extended intrinsic auto-calibration enables us to use distorted input images, making tedious calibration processes obsolete, compared to current state-of-the-art methods. Code available at: https://github.com/marcusvaltonen/DronePoseLib.
10.
  • Wadenbäck, Mårten (author)
  • A Result for Orthogonal Plus Rank-1 Matrices
  • 2015
  • Other publication (other academic/artistic). Abstract:
    • In this paper the sum of an orthogonal matrix and an outer product is studied, and a relation between the norms of the vectors forming the outer product and the singular values of the resulting matrix is presented. The main result may be found in Theorem 1.
11.
  • Wadenbäck, Mårten, et al. (author)
  • Ego-Motion Recovery and Robust Tilt Estimation for Planar Motion Using Several Homographies
  • 2014
  • In: 9th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2014), Proceedings of. - : SCITEPRESS - Science and Technology Publications. ; , s. 635-639
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we suggest an improvement to a recent algorithm for estimating the pose and ego-motion of a camera which is constrained to planar motion at a constant height above the floor, with a constant tilt. Such motion is common in robotics applications where a camera is mounted onto a mobile platform and directed towards the floor. Due to the planar nature of the scene, images taken with such a camera will be related by a planar homography, which may be used to extract the ego-motion and camera pose. Earlier algorithms for this particular kind of motion were not concerned with determining the tilt of the camera, focusing instead on recovering only the motion. Estimating the tilt is a necessary step in order to create a rectified map for a SLAM system. Our contribution extends the aforementioned recent method, and we demonstrate that our enhanced algorithm gives more accurate estimates of the motion parameters.
12.
  • Wadenbäck, Mårten (author)
  • Homography-Based Positioning and Planar Motion Recovery
  • 2017
  • Doctoral thesis (other academic/artistic). Abstract:
    • Planar motion is an important and frequently occurring situation in mobile robotics applications. This thesis concerns estimation of ego-motion and pose of a single downwards oriented camera under the assumptions of planar motion and known internal camera parameters. The so-called essential matrix (or its uncalibrated counterpart, the fundamental matrix) is frequently used in computer vision applications to compute a reconstruction in 3D of the camera locations and the observed scene. However, if the observed points are expected to lie on a plane - e.g. the ground plane - this makes the determination of these matrices an ill-posed problem. Instead, methods based on homographies are better suited to this situation. One section of this thesis is concerned with the extraction of the camera pose and ego-motion from such homographies. We present both a direct SVD-based method and an iterative method, which both solve this problem. The iterative method is extended to allow simultaneous determination of the camera tilt from several homographies obeying the same planar motion model. This extension improves the robustness of the original method, and it provides consistent tilt estimates for the frames that are used for the estimation. The methods are evaluated using experiments on both real and synthetic data. Another part of the thesis deals with the problem of computing the homographies from point correspondences. By using conventional homography estimation methods for this, the resulting homography is of a too general class and is not guaranteed to be compatible with the planar motion assumption. For this reason, we enforce the planar motion model at the homography estimation stage with the help of a new homography solver using a number of polynomial constraints on the entries of the homography matrix. In addition to giving a homography of the right type, this method uses only 2.5 point correspondences instead of the conventional four, which is beneficial, e.g., when used in a RANSAC framework for outlier removal.
13.
  • Wadenbäck, Mårten, et al. (author)
  • Planar Motion and Hand-Eye Calibration Using Inter-Image Homographies from a Planar Scene
  • 2013
  • In: 8th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2013), Proceedings of. - : SciTePress - Science and Technology Publications. ; , s. 164-168
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we consider a mobile platform performing partial hand-eye calibration and Simultaneous Localisation and Mapping (SLAM) using images of the floor along with the assumptions of planar motion and constant internal camera parameters. The method used is based on a direct parametrisation of the camera motion, combined with an iterative scheme for determining the motion parameters from inter-image homographies. Experiments are carried out on both real and synthetic data. For the real data, the estimates obtained are compared to measurements by an industrial robot, which serve as ground truth. The results demonstrate that our method produces consistent estimates of the camera position and orientation. We also make some remarks about patterns of motion for which the method fails.
14.
  • Wadenbäck, Mårten (author)
  • Planar Motion and Visual Odometry: Pose Estimation from Homographies
  • 2014
  • Licentiate thesis (other academic/artistic). Abstract:
    • This thesis concerns ego-motion and pose estimation of a single camera under the assumptions of planar motion and constant internal camera parameters. Planar motion is common for cameras mounted onto mobile robots, particularly in indoor scenarios, as they remain at a constant height above the ground plane. In Paper A, a parametrisation of the camera motion and pose is presented, along with an iterative approach for determining the parameters. Paper B describes how to extend the method in Paper A to use more than one homography at a time in the estimation process, thereby improving the estimation accuracy and robustness. Paper C presents an alternative method for estimating the distance between camera positions that is independent of the estimated orientation of the cameras.
15.
  • Wadenbäck, Mårten, et al. (author)
  • Recovering Planar Motion from Homographies Obtained using a 2.5-Point Solver for a Polynomial System
  • 2016
  • In: IEEE International Conference on Image Processing (ICIP), 2016. - 9781467399616 ; , s. 2966-2970
  • Conference paper (peer-reviewed). Abstract:
    • We present a minimal solver for a special kind of homography arising in applications with planar camera motion (e.g. mobile robotics applications). Since the camera motion we consider only has five degrees of freedom, an explicit parametrisation allows us to reduce the required number of point correspondences to 2.5. Using fewer point correspondences is beneficial when used together with RANSAC, but more importantly, the proposed special solver ensures that the estimated homography is of the correct type (in contrast to the DLT, which estimates a general homography). Our method works by enforcing eleven independent polynomial constraints on the elements of this kind of homography matrix, through the framework of the action matrix method for solving polynomial equations. Some analytical investigation using symbolic software has been conducted in order to understand the properties of the polynomial system, and these results have been used to help guide our design of the solver. Additionally, we provide a direct method to recover the sought motion parameters from the homography matrix. We demonstrate that it is possible to recover both the homography and its generating parameters efficiently and accurately.
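Note: The 2.5-point figure above follows from degree-of-freedom counting: a general homography has 8 degrees of freedom (a 3x3 matrix up to scale) and each point correspondence contributes two equations, hence the conventional four points; the planar-motion homography considered here has only 5 degrees of freedom, so 5/2 = 2.5 correspondences suffice, i.e. two full correspondences plus a single coordinate of a third.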
16.
  • Wadenbäck, Mårten, et al. (author)
  • Trajectory Estimation Using Relative Distances Extracted from Inter-image Homographies
  • 2014
  • In: Computer and Robot Vision (CRV), 2014 Canadian Conference on. - 9781479943388 ; , s. 232-237
  • Conference paper (peer-reviewed). Abstract:
    • The main idea of this paper is to use distances between camera positions to recover the trajectory of a mobile robot. We consider a mobile platform equipped with a single fixed camera using images of the floor and their associated inter-image homographies to find these distances. We show that under the assumptions that the camera is rigidly mounted with a constant tilt and travelling at a constant height above the floor, the distance between two camera positions may be expressed in terms of the condition number of the inter-image homography. Experiments are conducted on synthetic data to verify that the derived distance formula gives distances close to the true ones and is not too sensitive to noise. We also describe how the robot trajectory may be represented as a graph with edge lengths determined by the distances computed using the formula above, and present one possible method to construct this graph given some of these distances. The experiments show promising results.
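Note: The closed-form relation between inter-camera distance and the condition number of the inter-image homography is given in the paper and is not reproduced here; the snippet below merely sets up the standard plane-induced homography for a camera translating above the floor, so the relationship can be explored numerically (the untilted, calibrated setup is a simplifying assumption of this note):

    import numpy as np

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def floor_homography(R_rel, t_rel, height):
        # Plane-induced homography H = R + t n^T / d for the floor plane,
        # in normalised (calibrated) image coordinates; n is the floor normal
        # in the first camera frame and d the camera height.
        n = np.array([0.0, 0.0, 1.0])
        return R_rel + np.outer(t_rel, n) / height

    for dist in [0.1, 0.5, 1.0, 2.0]:
        H = floor_homography(rot_z(np.deg2rad(10)), np.array([dist, 0.0, 0.0]), 1.0)
        print(dist, np.linalg.cond(H))   # the condition number grows with the distance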
17.
  • Wadenbäck, Mårten, et al. (author)
  • Visual Odometry from Two Point Correspondences and Initial Automatic Tilt Calibration
  • 2017
  • In: 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017). - : SCITEPRESS - Science and Technology Publications. ; 6, s. 340-346
  • Conference paper (peer-reviewed). Abstract:
    • Ego-motion estimation is an important step towards fully autonomous mobile robots. In this paper we propose the use of an initial but automatic camera tilt calibration, which transforms the subsequent motion estimation to a 2D rigid body motion problem. This transformed problem is solved ℓ2-optimally using RANSAC and a two-point method for rigid body motion. The method is experimentally evaluated using a camera mounted onto a mobile platform. The results are compared to measurements from a highly accurate external camera positioning system which are used as gold standard. The experiments show promising results on real data. © 2017 by SCITEPRESS - Science and Technology Publications, Lda.
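Note: The two-point method for rigid body motion mentioned above estimates a 2D rotation and translation from point correspondences. One standard way to compute such an estimate is an orthogonal-Procrustes style solution (shown here as a generic illustration, not necessarily the paper's exact formulation):

    import numpy as np

    def estimate_rigid_2d(src, dst):
        # Least-squares 2D rotation R and translation t with dst ≈ src @ R.T + t;
        # works for two or more point correspondences.
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        C = (dst - mu_d).T @ (src - mu_s)                    # cross-covariance
        U, _, Vt = np.linalg.svd(C)
        D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])   # enforce det(R) = +1
        R = U @ D @ Vt
        t = mu_d - R @ mu_s
        return R, t

    theta = np.deg2rad(25)
    R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    src = np.array([[0.0, 0.0], [1.0, 0.0]])
    dst = src @ R_true.T + np.array([0.3, -0.2])
    print(estimate_rigid_2d(src, dst))                       # recovers R_true and (0.3, -0.2)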
18.
  • Örnhag, Marcus Valtonen, et al. (author)
  • Minimal Solvers for Indoor UAV Positioning
  • 2021
  • In: 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR). - : IEEE COMPUTER SOC. - 1051-4651. - 9781728188089 ; , s. 1136-1143
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we consider a collection of relative pose problems which arise naturally in applications for visual indoor navigation using unmanned aerial vehicles (UAVs). We focus on cases where additional information from an onboard IMU is available and thus provides a partial extrinsic calibration through the gravitational vector. The solvers are designed for a partially calibrated camera, for a variety of realistic indoor scenarios, which makes it possible to navigate using images of the ground floor. Current state-of-the-art solvers use more general assumptions, such as using arbitrary planar structures; however, these solvers do not yield adequate reconstructions for real scenes, nor do they perform fast enough to be incorporated in real-time systems. We show that the proposed solvers enjoy better numerical stability, are faster, and require fewer point correspondences, compared to state-of-the-art approaches. These properties are vital components for robust navigation in real-time systems, and we demonstrate on both synthetic and real data that our method outperforms other solvers, and yields superior motion estimation.
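Note: Using the IMU's gravity direction as a partial extrinsic calibration typically means rotating the camera frame so that the measured gravity maps to a canonical vertical axis, which removes two of the three rotational degrees of freedom from the relative-pose problem. A sketch of that alignment step (a generic technique, not the paper's code):

    import numpy as np

    def gravity_alignment_rotation(g_cam, target=(0.0, 0.0, -1.0)):
        # Smallest rotation taking the measured gravity direction (camera frame)
        # to the canonical vertical axis, via Rodrigues' formula.
        a = np.asarray(g_cam, dtype=float)
        a = a / np.linalg.norm(a)
        b = np.asarray(target, dtype=float)
        b = b / np.linalg.norm(b)
        v, c = np.cross(a, b), float(a @ b)
        if np.isclose(c, -1.0):                    # opposite vectors: rotate 180 degrees
            w = np.cross(a, np.eye(3)[np.argmin(np.abs(a))])
            w = w / np.linalg.norm(w)
            return 2.0 * np.outer(w, w) - np.eye(3)
        K = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
        return np.eye(3) + K + (K @ K) / (1.0 + c)

    g_measured = np.array([0.1, -0.2, -9.7])       # accelerometer reading, m/s^2
    R = gravity_alignment_rotation(g_measured)
    print(R @ (g_measured / np.linalg.norm(g_measured)))   # ≈ [0, 0, -1]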
19.
  • Örnhag, Marcus Valtonen, et al. (author)
  • Planar motion bundle adjustment
  • 2019
  • In: ICPRAM 2019 - Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods. - : SCITEPRESS - Science and Technology Publications. - 9789897583513 ; , s. 24-31
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we consider trajectory recovery for two cameras directed towards the floor, and which are mounted rigidly on a mobile platform. Previous work for this specific problem geometry has focused on locally minimising an algebraic error between inter-image homographies to estimate the relative pose. In order to accurately track the platform globally it is necessary to refine the estimation of the camera poses and 3D locations of the feature points, which is commonly done by utilising bundle adjustment; however, existing software packages providing such methods do not take the specific problem geometry into account, and the result is a physically inconsistent solution. We develop a bundle adjustment algorithm which incorporates the planar motion constraint, and devise a scheme that utilises the sparse structure of the problem. Experiments are carried out on real data and the proposed algorithm shows an improvement compared to established generic methods.