SwePub
Search the SwePub database


Hit list for the search "WFRF:(Babaie Zadeh Massoud)"

Search: WFRF:(Babaie Zadeh Massoud)

  • Results 1-9 of 9
1.
  • Ghayem, Fateme, et al. (author)
  • Sparse Signal Recovery Using Iterative Proximal Projection
  • 2018
  • In: IEEE Transactions on Signal Processing. Institute of Electrical and Electronics Engineers (IEEE). ISSN 1053-587X, E-ISSN 1941-0476. Vol. 66, no. 4, pp. 879-894
  • Journal article (peer-reviewed), abstract:
    • This paper is concerned with designing efficient algorithms for recovering sparse signals from noisy underdetermined measurements. More precisely, we consider minimization of a nonsmooth and nonconvex sparsity promoting function subject to an error constraint. To solve this problem, we use an alternating minimization penalty method, which ends up with an iterative proximal-projection approach. Furthermore, inspired by accelerated gradient schemes for solving convex problems, we equip the obtained algorithm with a so-called extrapolation step to boost its performance. Additionally, we prove its convergence to a critical point. Our extensive simulations on synthetic as well as real data verify that the proposed algorithm considerably outperforms some well-known and recently proposed algorithms.
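The entry above describes an iterative proximal-projection scheme accelerated by an extrapolation step. As a minimal illustration of that general pattern (not the paper's IPP algorithm), the Python sketch below runs a proximal-gradient loop with momentum on the standard ℓ1-penalized least-squares problem; the soft-thresholding proximal operator, the penalty weight lam, and the iteration count are placeholder assumptions.

    # Illustrative sketch: proximal-gradient iterations with an extrapolation
    # (momentum) step. Soft thresholding stands in for the proximal operator
    # of the paper's nonconvex penalty; lam and n_iter are arbitrary choices.
    import numpy as np

    def soft_threshold(z, t):
        # proximal operator of t*||.||_1: shrink each entry toward zero by t
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def prox_grad_with_extrapolation(A, b, lam=0.1, n_iter=200):
        n = A.shape[1]
        x = np.zeros(n)
        x_prev = x.copy()
        L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
        for k in range(1, n_iter + 1):
            w = x + (k - 1) / (k + 2) * (x - x_prev)   # extrapolation step
            grad = A.T @ (A @ w - b)                   # gradient of 0.5*||A w - b||^2
            x_prev, x = x, soft_threshold(w - grad / L, lam / L)  # proximal step
        return x

For an underdetermined A and a noisy b generated from a sparse vector, prox_grad_with_extrapolation(A, b) returns a sparse estimate; the momentum term (k - 1)/(k + 2) plays the role of the extrapolation step that the abstract credits with boosting performance.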
2.
  • Koochakzadeh, Ali, et al. (author)
  • Multi-antenna assisted spectrum sensing in spatially correlated noise environments
  • 2015
  • In: Signal Processing. Elsevier BV. ISSN 0165-1684, E-ISSN 1872-7557. Vol. 108, pp. 69-76
  • Journal article (peer-reviewed), abstract:
    • A significant challenge in spectrum sensing is to lower the signal-to-noise ratio needed to detect the presence of primary users when the noise level may also be unknown. To meet this challenge, multi-antenna techniques are more efficient than other approaches. In a typical compact multi-antenna system, the small interelement spacing makes mutual coupling between the thermal noises of adjacent receivers significant. In this paper, unlike most spectrum sensing algorithms, which assume spatially uncorrelated noise, the noise on adjacent antennas is allowed to have arbitrary correlations. Also, in contrast to some other algorithms, no prior assumption is made on the temporal properties of the signals. We exploit low-rank/sparse matrix decomposition algorithms to estimate the noise and received source covariance matrices. Given these estimates, we propose a Semi-Constant False Alarm Rate (S-CFAR) detector, in which the probability of false alarm is constant under scaling of the noise covariance matrix, to test for the presence of primary users. To analyze the efficiency of our algorithm, we derive an approximate probability of detection. Numerical simulations show that the proposed algorithm consistently and considerably outperforms state-of-the-art multi-antenna spectrum sensing algorithms.
3.
  • Malek Mohammadi, Mohammadreza, et al. (author)
  • DOA estimation in partially correlated noise using low-rank/sparse matrix decomposition
  • 2014
  • In: 2014 IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM). IEEE Computer Society. ISBN 9781479914814. pp. 373-376
  • Conference paper (peer-reviewed), abstract:
    • We consider the problem of direction-of-arrival (DOA) estimation in unknown partially correlated noise environments where the noise covariance matrix is sparse. A sparse noise covariance matrix is a common model for a sparse array of sensors consisting of several widely separated subarrays. Since the interelement spacing among sensors in a subarray is small, the noise within a subarray is in general spatially correlated, while, due to the large distances between subarrays, the noise between them is uncorrelated. Consequently, the noise covariance matrix of such an array has a block diagonal structure which is indeed sparse. Moreover, in an ordinary nonsparse array, because of the small distance between adjacent sensors, there is noise coupling between neighboring sensors, whereas one can assume that non-adjacent sensors have spatially uncorrelated noise, which again makes the array noise covariance matrix sparse. Utilizing some recently available tools in low-rank/sparse matrix decomposition, matrix completion, and sparse representation, we propose a novel method which can resolve possibly correlated or even coherent sources in the aforementioned partially correlated noise. In particular, when the sources are uncorrelated, our approach involves solving a second-order cone program (SOCP), and if they are correlated or coherent, one needs to solve a computationally harder convex program. We demonstrate the effectiveness of the proposed algorithm by numerical simulations and comparison to the Cramer-Rao bound (CRB).
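To make the low-rank/sparse covariance decomposition step above concrete, the following Python sketch splits a sample covariance matrix R into a low-rank part L (source contribution) and a sparse part S (block-correlated noise) by alternating singular-value thresholding and entrywise soft thresholding. The penalized least-squares formulation and the weights lam_L and lam_S are illustrative assumptions; the paper's actual convex programs (the SOCP and its harder variants) differ.

    # Illustrative alternating minimization of
    #   0.5*||R - L - S||_F^2 + lam_L*||L||_* + lam_S*||S||_1
    # as a stand-in for the low-rank/sparse decomposition named in the abstract.
    import numpy as np

    def svt(X, tau):
        # singular-value thresholding: proximal operator of tau*||.||_* (nuclear norm)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def soft(X, tau):
        # entrywise soft thresholding: proximal operator of tau*||.||_1
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def lowrank_plus_sparse(R, lam_L=0.1, lam_S=0.1, n_iter=100):
        L = np.zeros_like(R)
        S = np.zeros_like(R)
        for _ in range(n_iter):
            L = svt(R - S, lam_L)   # update the low-rank (source) part
            S = soft(R - L, lam_S)  # update the sparse (noise covariance) part
        return L, S

Each update is the exact minimizer of the objective with the other variable held fixed, so the loop is a simple block coordinate descent on a convex problem.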
4.
  • Malek Mohammadi, Mohammadreza, et al. (author)
  • Iterative Concave Rank Approximation for Recovering Low-Rank Matrices
  • 2014
  • In: IEEE Transactions on Signal Processing. IEEE Signal Processing Society. ISSN 1053-587X, E-ISSN 1941-0476. Vol. 62, no. 20, pp. 5213-5226
  • Journal article (peer-reviewed), abstract:
    • In this paper, we propose a new algorithm for recovery of low-rank matrices from compressed linear measurements. The underlying idea of this algorithm is to closely approximate the rank function with a smooth function of singular values, and then minimize the resulting approximation subject to the linear constraints. The accuracy of the approximation is controlled via a scaling parameter δ, where a smaller δ corresponds to a more accurate fitting. The consequent optimization problem for any finite δ is nonconvex. Therefore, to decrease the risk of ending up in local minima, a series of optimizations is performed, starting with optimizing a rough approximation (a large δ) and followed by successively optimizing finer approximations of the rank with smaller δ's. To solve the optimization problem for any δ > 0, it is converted to a new program in which the cost is a function of two auxiliary positive semidefinite variables. The paper shows that this new program is concave and applies a majorize-minimize technique to solve it, which, in turn, leads to a few convex optimization iterations. This optimization scheme is also equivalent to a reweighted Nuclear Norm Minimization (NNM). For any δ > 0, we derive necessary and sufficient conditions for exact recovery which are weaker than those corresponding to NNM. On the numerical side, the proposed algorithm is compared to NNM and a reweighted NNM in solving affine rank minimization and matrix completion problems, showing its considerable and consistent superiority in terms of success rate.
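The central object in the entry above is a smooth, δ-controlled approximation of the rank built from the singular values. One commonly used surrogate of this kind, given here purely as an illustration (the paper's exact function may differ), is

    \[
    \operatorname{rank}(X) \;\approx\; h_\delta(X) \;=\; \sum_{i} \Bigl(1 - e^{-\sigma_i(X)/\delta}\Bigr),
    \qquad
    \lim_{\delta \to 0^{+}} h_\delta(X) \;=\; \operatorname{rank}(X),
    \]

where σ_i(X) are the singular values of X. A large δ yields a smooth but loose approximation, while a small δ essentially counts the nonzero singular values, which is why the algorithm starts from a large δ and successively shrinks it.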
5.
  • Malek Mohammadi, Mohammadreza, et al. (author)
  • Performance Guarantees for Schatten-p Quasi-Norm Minimization in Recovery of Low-Rank Matrices
  • 2015
  • In: Signal Processing. Elsevier BV. ISSN 0165-1684, E-ISSN 1872-7557. Vol. 114, pp. 225-230
  • Journal article (peer-reviewed), abstract:
    • We address some theoretical guarantees for Schatten-p quasi-norm minimization (p ∈ (0,1]) in recovering low-rank matrices from compressed linear measurements. Firstly, using null space properties of the measurement operator, we provide a sufficient condition for exact recovery of low-rank matrices. This condition guarantees unique recovery of matrices of ranks equal to or larger than what is guaranteed by nuclear norm minimization. Secondly, this sufficient condition leads to a theorem proving that all restricted isometry property (RIP) based sufficient conditions for ℓp quasi-norm minimization generalize to Schatten-p quasi-norm minimization. Based on this theorem, we provide a few RIP-based recovery conditions.
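For reference, the Schatten-p quasi-norm in the title and abstract above is defined through the singular values of the matrix:

    \[
    \|X\|_{S_p} \;=\; \Bigl(\sum_{i} \sigma_i(X)^{p}\Bigr)^{1/p}, \qquad p \in (0,1].
    \]

For p = 1 this is the nuclear norm used in nuclear norm minimization, and as p → 0⁺ the quantity Σ_i σ_i(X)^p approaches rank(X), which is why smaller values of p promote low rank more aggressively.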
6.
  • Malek Mohammadi, Mohammadreza, et al. (author)
  • Recovery of Low-Rank Matrices Under Affine Constraints via a Smoothed Rank Function
  • 2014
  • In: IEEE Transactions on Signal Processing. IEEE Signal Processing Society. ISSN 1053-587X, E-ISSN 1941-0476. Vol. 62, no. 4, pp. 981-992
  • Journal article (peer-reviewed), abstract:
    • In this paper, the problem of matrix rank minimization under affine constraints is addressed. The state-of-the-art algorithms can recover matrices with a rank much less than what is sufficient for the uniqueness of the solution of this optimization problem. We propose an algorithm based on a smooth approximation of the rank function, which practically improves recovery limits on the rank of the solution. This approximation leads to a non-convex program; thus, to avoid getting trapped in local solutions, we use the following scheme. Initially, a rough approximation of the rank function subject to the affine constraints is optimized. As the algorithm proceeds, finer approximations of the rank are optimized and the solver is initialized with the solution of the previous approximation until reaching the desired accuracy. On the theoretical side, benefiting from the spherical section property, we will show that the sequence of the solutions of the approximating programs converges to the minimum rank solution. On the experimental side, it will be shown that the proposed algorithm, termed SRF standing for smoothed rank function, can recover matrices, which are unique solutions of the rank minimization problem and yet not recoverable by nuclear norm minimization. Furthermore, it will be demonstrated that, in completing partially observed matrices, the accuracy of SRF is considerably and consistently better than some famous algorithms when the number of revealed entries is close to the minimum number of parameters that uniquely represent a low-rank matrix.
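The entry above hinges on a continuation scheme: optimize a rough approximation of the rank first, then warm-start successively finer approximations. The Python sketch below illustrates that idea for matrix completion under stated assumptions (a Gaussian-shaped surrogate of the singular values, a fixed δ schedule, and a plain gradient-projection inner loop); it is a toy illustration of the continuation principle rather than a faithful reproduction of the SRF algorithm.

    # Toy continuation (coarse-to-fine) sketch for matrix completion.
    # Surrogate, delta schedule, step size and iteration counts are assumptions.
    import numpy as np

    def continuation_completion(M, mask, deltas=(1.0, 0.5, 0.25, 0.1), inner=50, mu=1.0):
        X = np.where(mask, M, 0.0)                  # feasible starting point
        for delta in deltas:                        # decreasing delta: finer rank surrogate
            for _ in range(inner):
                U, s, Vt = np.linalg.svd(X, full_matrices=False)
                # gradient step on sum_i exp(-s_i^2 / (2*delta^2)): shrink small singular values
                shrink = mu * s * np.exp(-s**2 / (2 * delta**2))
                X = U @ np.diag(s - shrink) @ Vt
                X = np.where(mask, M, X)            # project back onto the observed entries
        return X

Each outer pass reuses the previous solution as its starting point, which is the warm-starting described in the abstract.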
7.
  • Malek-Mohammadi, Mohammadreza, et al. (author)
  • Successive Concave Sparsity Approximation for Compressed Sensing
  • 2016
  • In: IEEE Transactions on Signal Processing. ISSN 1053-587X, E-ISSN 1941-0476. Vol. 64, no. 21, pp. 5657-5671
  • Journal article (peer-reviewed), abstract:
    • In this paper, based on a successively accuracy-increasing approximation of the ℓ0 norm, we propose a new algorithm for recovery of sparse vectors from underdetermined measurements. The approximations are realized with a certain class of concave functions that aggressively induce sparsity and whose closeness to the ℓ0 norm can be controlled. We prove that the series of the approximations asymptotically coincides with the ℓ1 and ℓ0 norms when the approximation accuracy changes from the worst fitting to the best fitting. When measurements are noise-free, an optimization scheme is proposed that leads to a number of weighted ℓ1 minimization programs, whereas, in the presence of noise, we propose two iterative thresholding methods that are computationally appealing. A convergence guarantee for the iterative thresholding method is provided, and, for a particular function in the class of the approximating functions, we derive the closed-form thresholding operator. We further present some theoretical analyses via the restricted isometry, null space, and spherical section properties. Our extensive numerical simulations indicate that the proposed algorithm closely follows the performance of the oracle estimator for a range of sparsity levels wider than those of the state-of-the-art algorithms.
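One representative family of concave, accuracy-controlled approximations that interpolates between the ℓ1 and ℓ0 norms exactly as the abstract describes (shown only as an illustration; the paper defines its own class of functions) is

    \[
    f_\sigma(x) \;=\; \sum_{i}\bigl(1 - e^{-|x_i|/\sigma}\bigr),
    \qquad
    \lim_{\sigma \to 0^{+}} f_\sigma(x) \;=\; \|x\|_0,
    \qquad
    \lim_{\sigma \to \infty} \sigma\, f_\sigma(x) \;=\; \|x\|_1 .
    \]

Because each term is concave in |x_i|, majorizing it by its tangent line yields a weighted ℓ1 objective, which is one standard route to the weighted ℓ1 minimization programs mentioned in the abstract.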
8.
  • Malek Mohammadi, Mohammadreza, et al. (author)
  • Upper bounds on the error of sparse vector and low-rank matrix recovery
  • 2016
  • In: Signal Processing. Elsevier. ISSN 0165-1684, E-ISSN 1872-7557. Vol. 120, pp. 249-254
  • Journal article (peer-reviewed), abstract:
    • Suppose that a solution x to an underdetermined linear system b = Ax is given. x is approximately sparse, meaning that it has a few large components compared to its other, small entries. However, the total number of nonzero components of x is large enough to violate any condition for the uniqueness of the sparsest solution. On the other hand, if only the dominant components are considered, then it will satisfy the uniqueness conditions. One intuitively expects that x should not be far from the true sparse solution x0. It was already shown that this intuition holds by providing upper bounds on ||x - x0|| which are functions of the magnitudes of the small components of x but independent of x0. In this paper, we tighten one of the available bounds on ||x - x0|| and extend this result to the case where b is perturbed by noise. Additionally, we generalize the upper bounds to the low-rank matrix recovery problem.
9.
  • Sadeghi, Mustafa, et al. (author)
  • L0soft : ℓ0 minimization via soft thresholding
  • 2019
  • In: Proceedings of the 27th European Signal Processing Conference (EUSIPCO).
  • Conference paper (peer-reviewed), abstract:
    • We propose a new algorithm for finding a sparse solution of a linear system of equations using ℓ0 minimization. The proposed algorithm relies on approximating the non-smooth ℓ0 (pseudo) norm with a differentiable function. Unlike other approaches, we utilize a particular definition of the ℓ0 norm which states that the ℓ0 norm of a vector can be computed as the ℓ1 norm of its sign vector. Then, using a smooth approximation of the sign function, the problem is converted to ℓ1 minimization. This problem is solved via iterative proximal algorithms. Our simulations on both synthetic and real data demonstrate the promising performance of the proposed scheme.
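The identity the entry above builds on, combined with one possible smooth approximation of the sign function (the particular smoothing below is an assumption for illustration; the paper may use a different one), reads

    \[
    \|x\|_0 \;=\; \|\operatorname{sign}(x)\|_1,
    \qquad
    \operatorname{sign}(x_i) \;\approx\; \frac{x_i}{|x_i| + \sigma}
    \;\;\Longrightarrow\;\;
    \|x\|_0 \;\approx\; \sum_i \frac{|x_i|}{|x_i| + \sigma},
    \]

with the approximation tightening as σ → 0⁺. Minimizing the smoothed objective subject to the linear constraints is then amenable to the iterative proximal (soft-thresholding) steps described in the abstract.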