SwePub
Search the SwePub database


Results list for the search "WFRF:(Panahi Ashkan 1986) srt2:(2015-2019)"


  • Results 1-10 of 17
2.
  • Huang, Yuming, et al. (author)
  • Fusion of Community Structures in Multiplex Networks by Label Constraints
  • 2018
  • In: 26th European Signal Processing Conference (EUSIPCO), pp. 887-891
  • Conference paper (peer-reviewed), abstract:
    • We develop a Belief Propagation algorithm for the community detection problem in multiplex networks, which represent many real-world systems more accurately than single-layer networks. Previous work has established that real-world multiplex networks exhibit redundant structures/communities, and that community detection performance improves by aggregating (fusing) redundant layers generated from the same Stochastic Block Model (SBM). We introduce a probability model for generic multiplex networks that aims to fuse community structure across layers without assuming or seeking the same SBM generative model for different layers. Numerical experiments show that our model finds consistent communities across layers and yields a significant detectability improvement over the single-layer architecture. Our model also achieves performance comparable to a reference model in which consistent communities are assumed a priori. Finally, we compare our method with multilayer modularity optimization in heterogeneous networks and show that it detects correct community labels more reliably.
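The multilayer SBM setting the abstract builds on can be illustrated with a small sketch: two layers that share the same community labels but have different edge densities. This is a toy generator in plain NumPy with made-up parameter values, not the paper's belief-propagation code.

```python
import numpy as np

def sbm_layer(labels, p_in, p_out, rng):
    """Sample one undirected SBM adjacency matrix (no self-loops)
    for the given community labels."""
    n = len(labels)
    same = labels[:, None] == labels[None, :]       # same-community mask
    P = np.where(same, p_in, p_out)                 # edge probabilities
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1)                               # keep upper triangle
    return A + A.T                                  # symmetrize

rng = np.random.default_rng(2)
labels = np.repeat([0, 1], 30)                      # two communities of 30 nodes
# two layers with consistent communities but different (p_in, p_out)
layers = [sbm_layer(labels, 0.40, 0.05, rng),
          sbm_layer(labels, 0.30, 0.02, rng)]
```

A fusion method in this spirit would then exploit that both layers carry evidence about the same labels, even though their densities differ.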
3.
  • Mahdizadehaghdam, Shahin, et al. (author)
  • Deep Dictionary Learning: A PARametric NETwork Approach
  • 2019
  • In: IEEE Transactions on Image Processing. ISSN 1941-0042, 1057-7149; 28:10, pp. 4790-4802
  • Journal article (peer-reviewed), abstract:
    • Deep dictionary learning seeks multiple dictionaries at different image scales to capture complementary coherent characteristics. We propose a method for learning a hierarchy of synthesis dictionaries with an image-classification goal. The dictionaries and classification parameters are trained by a classification objective, and the sparse features are extracted by reducing a reconstruction loss in each layer. The reconstruction objectives in some sense regularize the classification problem and inject source-signal information into the extracted features. The performance of the proposed hierarchical method improves as layers are added, which in turn makes the model easier to tune and adapt. The proposed algorithm furthermore shows a remarkably lower fooling rate in the presence of adversarial perturbations. The proposed approach is validated by its classification performance on four benchmark datasets and compared to a Convolutional Neural Network (CNN) of similar size.
4.
  • Panahi, Ashkan, 1986, et al. (author)
  • A numerical implementation of gridless compressed sensing
  • 2015
  • In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. ISSN 1520-6149. ISBN 9781467369978; 2015-August, pp. 3342-3346
  • Conference paper (peer-reviewed), abstract:
    • Atomic norm denoising has recently been introduced as a generalization of the Least Absolute Shrinkage and Selection Operator (LASSO) to overcome the problem of off-grid parameters. The method has been found to possess many interesting theoretical properties. However, its implementation has only been discussed in the special case of spectral line estimation by uniform sampling. In this paper, we propose a general numerical method to solve the atomic norm denoising problem. The complexity of the proposed algorithm is proportional to the complexity of a single-parameter search in the parameter space; thus, in many interesting cases, including frequency estimation, it enjoys fast realization.
5.
  • Panahi, Ashkan, 1986, et al. (author)
  • A Universal Analysis of Large-Scale Regularized Least Squares Solutions
  • 2017
  • In: Advances in Neural Information Processing Systems. ISSN 1049-5258; pp. 3382-3391
  • Conference paper (peer-reviewed), abstract:
    • A problem that has been of recent interest in statistical inference, machine learning and signal processing is that of understanding the asymptotic behavior of regularized least squares solutions under random measurement matrices (or dictionaries). The Least Absolute Shrinkage and Selection Operator (LASSO or least-squares with \ell_1 regularization) is perhaps one of the most interesting examples. Precise expressions for the asymptotic performance of LASSO have been obtained for a number of different cases, in particular when the elements of the dictionary matrix are sampled independently from a Gaussian distribution. It has also been empirically observed that the resulting expressions remain valid when the entries of the dictionary matrix are independently sampled from certain non-Gaussian distributions. In this paper, we confirm these observations theoretically when the distribution is sub-Gaussian. We further generalize the previous expressions for a broader family of regularization functions and under milder conditions on the underlying random, possibly non-Gaussian, dictionary matrix. In particular, we establish the universality of the asymptotic statistics (e.g., the average quadratic risk) of LASSO with non-Gaussian dictionaries.
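The object of the universality analysis above is the ℓ1-regularized least-squares (LASSO) solution under a random, possibly non-Gaussian dictionary. As an illustrative sketch only (not the paper's analysis), the following solves LASSO by ISTA (proximal gradient with soft-thresholding) on a Rademacher, hence sub-Gaussian, dictionary; the dimensions and regularization level are made-up toy values.

```python
import numpy as np

def lasso_ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by ISTA."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L              # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
n, p, k = 80, 200, 5
# Rademacher entries: a sub-Gaussian, non-Gaussian dictionary
A = rng.choice([-1.0, 1.0], size=(n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.normal(0, 3, k)
y = A @ x_true + 0.01 * rng.normal(size=n)
x_hat = lasso_ista(A, y, lam=0.05)
```

Averaging the quadratic risk of `x_hat` over many such draws is the kind of asymptotic statistic the paper shows to be universal across sub-Gaussian entry distributions.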
6.
  • Panahi, Ashkan, 1986, et al. (author)
  • Clustering by Sum of Norms: Stochastic Incremental Algorithm, Convergence and Cluster Recovery
  • 2017
  • In: Proceedings of Machine Learning Research. ISBN 9781510855144; 6, pp. 4247-4260
  • Conference paper (peer-reviewed), abstract:
    • Standard clustering methods such as K-means, Gaussian mixture models, and hierarchical clustering are beset by local minima, which are sometimes drastically suboptimal. Moreover, the number of clusters K must be known in advance. The recently introduced sum-of-norms (SON) or Clusterpath convex relaxation of K-means and hierarchical clustering shrinks cluster centroids toward one another and ensures a unique global minimizer. We give a scalable stochastic incremental algorithm based on proximal iterations to solve the SON problem with convergence guarantees. We also show that the algorithm recovers clusters under quite general conditions, which take a form similar to the unifying proximity condition introduced in the approximation-algorithms community (covering paradigm cases such as Gaussian mixtures and planted partition models). We give experimental results confirming that our algorithm scales much better than previous methods while producing clusters of comparable quality.
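The SON objective referred to above, 0.5·Σᵢ‖xᵢ − uᵢ‖² + λ·Σᵢ<ⱼ‖uᵢ − uⱼ‖, can be sketched directly. The following uses plain subgradient descent as an illustrative stand-in for the paper's stochastic incremental proximal algorithm; the toy data, λ, and step size are made-up values.

```python
import numpy as np

def son_clustering(X, lam, lr=0.02, n_iter=500, eps=1e-8):
    """Minimize the sum-of-norms (SON) clustering objective
        0.5 * sum_i ||x_i - u_i||^2 + lam * sum_{i<j} ||u_i - u_j||
    by subgradient descent; the fusion term shrinks centroids u_i
    of nearby points toward one another."""
    U = X.copy()
    for _ in range(n_iter):
        D = U[:, None, :] - U[None, :, :]                 # pairwise centroid differences
        norms = np.linalg.norm(D, axis=2)[:, :, None] + eps
        G = (U - X) + lam * (D / norms).sum(axis=1)       # fidelity + fusion subgradient
        U -= lr * G
    return U

rng = np.random.default_rng(1)
# two well-separated Gaussian blobs of 10 points each
X = np.vstack([rng.normal([0.0, 0.0], 0.1, (10, 2)),
               rng.normal([3.0, 3.0], 0.1, (10, 2))])
U = son_clustering(X, lam=0.1)
# centroids within each blob contract toward a common point,
# while the two blobs remain distinct
```

Reading off cluster assignments then reduces to grouping centroids that have (nearly) fused, with no need to fix K in advance.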
7.
  • Panahi, Ashkan, 1986, et al. (author)
  • Demystifying Deep Learning: a Geometric Approach to Iterative Projections
  • 2018
  • In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. ISSN 1520-6149.
  • Conference paper (peer-reviewed), abstract:
    • Parametric approaches to learning, such as deep learning (DL), are highly popular in nonlinear regression, despite training that becomes extremely difficult as their complexity grows (e.g., the number of layers in DL). In this paper, we present an alternative semi-parametric framework which foregoes the ordinarily required feedback by introducing the novel idea of geometric regularization. We show that certain deep learning techniques, such as the residual network (ResNet) architecture, are closely related to our approach; hence, our technique can be used to analyze these types of deep learning. Moreover, we present preliminary results confirming that our approach can be easily trained to obtain complex structures.
8.
  • Panahi, Ashkan, 1986 (author)
  • Parameter Estimation and Filtering Using Sparse Modeling
  • 2015
  • Doctoral thesis (other academic/artistic), abstract:
    • Sparsity-based estimation techniques deal with the problem of retrieving a data vector from an undercomplete set of linear observations, when the data vector is known to have few nonzero elements at unknown positions. This is also known as the atomic decomposition problem, and it has been carefully studied in the field of compressed sensing. Recent findings have led to a method called basis pursuit, also known as the Least Absolute Shrinkage and Selection Operator (LASSO), as a numerically reliable sparsity-based approach. Although the atomic decomposition problem is generally NP-hard, it has been shown that basis pursuit may provide exact solutions under certain assumptions. This has led to an extensive study of signals with sparse representations in different domains, providing new general insight into signal processing. This thesis further investigates the role of sparsity-based techniques, especially basis pursuit, in solving parameter estimation problems. The relation between atomic decomposition and parameter estimation problems under a so-called separable model has also led to the application of basis pursuit to these problems. Although simulation results suggest a desirable trend in the behavior of parameter estimation by basis pursuit, a satisfactory analysis is still missing. The analysis of basis pursuit has been found difficult for several reasons, also related to its implementation: the role of the regularization parameter and of discretization are common issues. Moreover, the analysis of estimates with a variable order is, in this case, not reducible to multiple fixed-order analyses. In addition to implementation and analysis, the Bayesian aspects of basis pursuit and the combination of prior information have not been thoroughly discussed in the context of parameter estimation. In the research presented in this thesis, we provide methods to overcome the above difficulties in implementing basis pursuit for parameter estimation. In particular, the regularization-parameter selection problem and the so-called off-grid effect are addressed. We develop numerically stable algorithms to avoid discretization and study homotopy-based solutions for complex-valued problems. We use our continuous estimation algorithm as a framework to analyze basis pursuit. Moreover, we introduce finite-set-based mathematical tools to perform the analysis. Finally, we study the Bayesian aspects of basis pursuit; in particular, we introduce and study a recursive Bayesian filter for tracking the sparsity pattern in a variable parameter estimation setup.
9.
  • Panahi, Ashkan, 1986, et al. (author)
  • Performance Analysis of Sparsity-Based Parameter Estimation
  • 2017
  • In: IEEE Transactions on Signal Processing. - Institute of Electrical and Electronics Engineers (IEEE). ISSN 1941-0476, 1053-587X; 65:24, pp. 6478-6488
  • Journal article (peer-reviewed), abstract:
    • Since the advent of the ℓ1-regularized least squares method (LASSO), a new line of research has emerged, geared toward the application of the LASSO to parameter estimation problems. Recent years have witnessed considerable progress in this area. The notorious difficulty with discretization has been settled in the recent literature, and an entirely continuous estimation method is now available. However, an adequate analysis of this approach is lacking in the current literature. This paper provides a novel analysis of the LASSO as an estimator of continuous parameters. It differs from previous analyses in that our parameters of interest are associated with the support of the LASSO solution; in other words, our analysis characterizes the error in the parameterization of the support. We provide a novel framework for our analysis by studying nearly ideal sparse solutions. In this framework, we quantify the error in the high signal-to-noise-ratio regime. As the result depends on the choice of the regularization parameter, our analysis also provides new insight into the problem of selecting the regularization parameter. Without loss of generality, the results are expressed in the context of the direction-of-arrival estimation problem.
10.
  • Panahi, Ashkan, 1986, et al. (author)
  • Robust Subspace Clustering by Bi-Sparsity Pursuit: Guarantees and Sequential Algorithm
  • 2018
  • In: IEEE Winter Conference on Applications of Computer Vision. ISBN 9781538648865; pp. 1302-1311
  • Conference paper (peer-reviewed), abstract:
    • We consider subspace clustering under sparse noise, for which a non-convex optimization framework based on sparse data representations has recently been developed. This setup is suitable for a large variety of applications with high-dimensional data, such as image processing, where data naturally decompose into a sparse unstructured foreground and a background residing in a union of low-dimensional subspaces. In this framework, we further discuss both the performance and the implementation of the key optimization problem. We provide an analysis of this optimization problem demonstrating that our approach is capable of recovering linear subspaces as a locally optimal solution for sufficiently large data sets and sparse noise vectors. We also propose a sequential algorithmic solution, which is particularly useful for extremely large data sets and online vision applications such as video processing.
