SwePub
Search the SwePub database

  Extended search

Hit list for the search "WFRF:(Savas Berkant 1977 ) "

Search: WFRF:(Savas Berkant 1977 )

  • Result 1-10 of 18
1.
  •  
2.
  • Eldén, Lars, et al. (author)
  • Perturbation Theory and Optimality Conditions for the Best Multilinear Rank Approximation of a Tensor
  • 2011
  • In: SIAM Journal on Matrix Analysis and Applications. - SIAM. - ISSN 0895-4798, E-ISSN 1095-7162 ; 32:4, pp. 1422-1450
  • Journal article (peer-reviewed), abstract:
    • The problem of computing the best rank-(p,q,r) approximation of a third order tensor is considered. First the problem is reformulated as a maximization problem on a product of three Grassmann manifolds. Then expressions for the gradient and the Hessian are derived in a local coordinate system at a stationary point, and conditions for a local maximum are given. A first order perturbation analysis is performed using the Grassmann manifold framework. The analysis is illustrated in a few examples, and it is shown that the perturbation theory for the singular value decomposition is a special case of the tensor theory. (A short numerical sketch follows this entry.)
  •  
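The Eldén and Savas article above works with the best rank-(p,q,r) approximation of a third order tensor on a product of Grassmann manifolds. The sketch below is not that method; it is only the truncated HOSVD, a common baseline which, as entry 8 notes, is not optimal in general. The tensor, its sizes, and the target ranks are illustrative assumptions, written with numpy.

import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: rows indexed by the chosen mode, columns by the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    # Factor matrices: leading left singular vectors of each mode unfolding.
    U = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        U.append(u[:, :r])
    # Core tensor C = T x_1 U1^T x_2 U2^T x_3 U3^T.
    C = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    return U, C

def reconstruct(U, C):
    # B = C x_1 U1 x_2 U2 x_3 U3, a tensor of multilinear rank at most (p, q, r).
    return np.einsum('abc,ia,jb,kc->ijk', C, U[0], U[1], U[2])

rng = np.random.default_rng(0)
T = rng.standard_normal((10, 8, 6))        # toy third order tensor
U, C = truncated_hosvd(T, (3, 3, 2))       # rank-(3,3,2) approximation
B = reconstruct(U, C)
print(np.linalg.norm(T - B) / np.linalg.norm(T))   # relative approximation error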
3.
  •  
4.
  • Kucher, Kostiantyn, Dr. 1989-, et al. (author)
  • Visualization of Swedish News Articles: A Design Study
  • 2024
  • In: Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP '24). - SciTePress, pp. 670-677
  • Conference paper (peer-reviewed), abstract:
    • The amount of available text data has increased rapidly in recent years, making it difficult for many users to find relevant information. To address this, natural language processing (NLP) and text visualization methods have been developed; however, they typically focus on English texts only, while support for low-resource languages is limited. The aim of this design study was to implement a visualization prototype for exploring a large number of Swedish news articles (made available by industrial collaborators), including the temporal and relational data aspects. Sketches of three visual representations were designed and evaluated through user tests involving both our collaborators and end users (journalists). Next, an NLP pipeline was designed in order to support dynamic and hierarchical topic modeling. The final part of the study resulted in an interactive visualization prototype that uses a variation of area charts to represent topic evolution. The prototype was evaluated through an internal case study and user tests with two groups of participants with backgrounds in journalism and NLP. The evaluation results reveal the participants’ preference for the representation focusing on top topics rather than the topic hierarchy, while suggestions for future work relevant for Swedish text data visualization are also provided. (A simplified code sketch follows this entry.)
  •  
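A minimal stand-in for the kind of topic-over-time summary the VISIGRAPP entry above describes. The paper's own pipeline (dynamic and hierarchical topic modeling on Swedish news) is not specified here, so this sketch only fits a plain scikit-learn LDA model on toy documents with made-up periods and prints the average topic share per period, the sort of table an area chart of top topics could be drawn from.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy documents and periods (placeholders, not the paper's news data).
docs = [
    "election parliament vote budget tax",
    "football match goal league season",
    "election campaign debate vote poll",
    "league player transfer goal coach",
]
periods = ["2023-Q1", "2023-Q1", "2023-Q2", "2023-Q2"]

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)                     # per-document topic shares

# Average topic share per period: one row per period, one column per topic.
for period in sorted(set(periods)):
    mask = np.array([p == period for p in periods])
    print(period, doc_topic[mask].mean(axis=0).round(2))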
5.
  •  
6.
  • Lu, Zhengdong, et al. (author)
  • Supervised Link Prediction Using Multiple Sources
  • 2010
  • In: Proceedings of the IEEE International Conference on Data Mining (ICDM), pp. 923-928
  • Conference paper (peer-reviewed), abstract:
    • Link prediction is a fundamental problem in social network analysis and modern-day commercial applications such as Facebook and Myspace. Most existing research approaches this problem by exploring the topological structure of a social network using only one source of information. However, in many application domains, in addition to the social network of interest, there are a number of auxiliary social networks and/or derived proximity networks available. The contribution of the paper is twofold: (1) a supervised learning framework that can effectively and efficiently learn the dynamics of social networks in the presence of auxiliary networks; (2) a feature design scheme for constructing a rich variety of path-based features using multiple sources, and an effective feature selection strategy based on structured sparsity. Extensive experiments on three real-world collaboration networks show that our model can effectively learn to predict new links using multiple sources, yielding higher prediction accuracy than unsupervised and single-source supervised models. (A simplified code sketch follows this entry.)
  •  
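A much simplified sketch of the idea in the ICDM entry above: build per-pair features from several networks and fit a supervised model on them. Here the features are just counts of length-2 paths in each source, and plain L1-regularized logistic regression stands in for the paper's structured-sparsity feature selection; all graphs are random toy data.

import numpy as np
from sklearn.linear_model import LogisticRegression

def random_graph(n, p, rng):
    # Symmetric 0/1 adjacency matrix of an undirected toy graph.
    A = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return A + A.T

rng = np.random.default_rng(1)
n = 30
A_target = random_graph(n, 0.10, rng)      # network whose links we model
A_aux1 = random_graph(n, 0.15, rng)        # auxiliary network
A_aux2 = random_graph(n, 0.15, rng)        # derived proximity network
sources = [A_target, A_aux1, A_aux2]

# Feature vector for a node pair (i, j): the number of length-2 paths
# i -> k -> j in each source network, i.e. (A @ A)[i, j].
paths2 = [A @ A for A in sources]
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
X = np.array([[P2[i, j] for P2 in paths2] for i, j in pairs])
y = np.array([A_target[i, j] for i, j in pairs]).astype(int)

# In the paper the labels would be links formed in a later snapshot; here the
# current links of the toy target network are used purely for illustration.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
print("weight per source:", clf.coef_.round(2))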
7.
  • Savas, Berkant, 1977- (author)
  • Algorithms in data mining : reduced rank regression and classification by tensor methods
  • 2005
  • Licentiate thesis (other academic/artistic), abstract:
    • In many fields of science, engineering, and economics, large amounts of data are stored and there is a need to analyze these data in order to extract information for various purposes. Data mining is a general concept involving different tools for performing this kind of analysis. The development of mathematical models and efficient algorithms is of key importance. In this thesis, which consists of three appended manuscripts, we discuss algorithms for reduced rank regression and for classification in the context of tensor theory.
The first two manuscripts deal with the reduced rank regression problem, which is encountered in the field of state-space subspace system identification. More specifically the problem is \[\min_{\operatorname{rank}(X) = k} \det\,(B - XA)(B - XA)^{\mathsf{T}},\] where A and B are given matrices and we want to find X under a certain rank condition that minimizes the determinant. This problem is not properly stated since it involves implicit assumptions on A and B so that (B - XA)(B - XA)^T is never singular. This deficiency of the determinant criterion is fixed by generalizing the minimization criterion to rank reduction and volume minimization of the objective matrix. The volume of a matrix is defined as the product of its nonzero singular values. We give an algorithm that solves the generalized problem and identify properties of the input and output signals causing a singular objective matrix.
Classification problems occur in many applications. The task is to determine the label or class of an unknown object. The third appended manuscript concerns the classification of handwritten digits in the context of tensors or multidimensional data arrays. Tensor theory is an area that attracts more and more attention because of the multidimensional structure of the collected data in various applications. Two classification algorithms are given based on the higher order singular value decomposition (HOSVD). The main algorithm performs a data reduction of 98-99% using the HOSVD prior to the construction of the class models. The models are computed as a set of orthonormal bases spanning the dominant subspaces for the different classes. An unknown digit is expressed as a linear combination of the basis vectors. The amount of computation is fairly low and the performance reasonably good, with an error rate of about 5%. (A short sketch of the volume criterion follows this entry.)
  •  
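The licentiate abstract above replaces the determinant criterion with the volume of the objective matrix, defined as the product of its nonzero singular values. Below is a minimal numpy sketch of that quantity; the tolerance and the test matrices are illustrative choices.

import numpy as np

def matrix_volume(M, tol=1e-12):
    # Volume of a matrix: the product of its nonzero singular values;
    # singular values below a relative tolerance are treated as zero.
    s = np.linalg.svd(M, compute_uv=False)
    s = s[s > tol * max(s[0], 1.0)]
    return float(np.prod(s)) if s.size else 0.0

# For a nonsingular square matrix the volume equals |det(M)|;
# for a rank-deficient matrix the determinant is 0 while the volume is not.
M = np.array([[3.0, 0.0], [0.0, 2.0]])
print(matrix_volume(M), abs(np.linalg.det(M)))   # 6.0 and 6.0
R = np.array([[1.0, 2.0], [2.0, 4.0]])           # rank 1, nonzero singular value 5
print(matrix_volume(R), abs(np.linalg.det(R)))   # about 5.0 and 0.0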
8.
  • Savas, Berkant, 1977- (author)
  • Algorithms in data mining using matrix and tensor methods
  • 2008
  • Doctoral thesis (other academic/artistic), abstract:
    • In many fields of science, engineering, and economics, large amounts of data are stored and there is a need to analyze these data in order to extract information for various purposes. Data mining is a general concept involving different tools for performing this kind of analysis. The development of mathematical models and efficient algorithms is of key importance. In this thesis we discuss algorithms for the reduced rank regression problem and algorithms for the computation of the best multilinear rank approximation of tensors.
The first two papers deal with the reduced rank regression problem, which is encountered in the field of state-space subspace system identification. More specifically the problem is \[\min_{\operatorname{rank}(X) = k} \det\,(B - XA)(B - XA)^{\mathsf{T}},\] where $A$ and $B$ are given matrices and we want to find $X$ under a certain rank condition that minimizes the determinant. This problem is not properly stated since it involves implicit assumptions on $A$ and $B$ so that $(B - XA)(B - XA)^{\mathsf{T}}$ is never singular. This deficiency of the determinant criterion is fixed by generalizing the minimization criterion to rank reduction and volume minimization of the objective matrix. The volume of a matrix is defined as the product of its nonzero singular values. We give an algorithm that solves the generalized problem and identify properties of the input and output signals causing a singular objective matrix.
Classification problems occur in many applications. The task is to determine the label or class of an unknown object. The third paper concerns the classification of handwritten digits in the context of tensors or multidimensional data arrays. Tensor and multilinear algebra is an area that attracts more and more attention because of the multidimensional structure of the collected data in various applications. Two classification algorithms are given based on the higher order singular value decomposition (HOSVD). The main algorithm performs a data reduction of 98-99% using the HOSVD prior to the construction of the class models. The models are computed as a set of orthonormal bases spanning the dominant subspaces for the different classes. An unknown digit is expressed as a linear combination of the basis vectors. The resulting algorithm achieves a classification error of about 5% with a fairly low amount of computation.
The remaining two papers discuss computational methods for the best multilinear rank approximation problem \[\min_{\mathcal{B}} \| \mathcal{A} - \mathcal{B}\|,\] where $\mathcal{A}$ is a given tensor and we seek the best low multilinear rank approximation tensor $\mathcal{B}$. This is a generalization of the best low rank matrix approximation problem. It is well known that for matrices the solution is given by truncating the singular values in the singular value decomposition (SVD) of the matrix. But for tensors in general the truncated HOSVD does not give an optimal approximation. For example, a third order tensor $\mathcal{B} \in \mathbb{R}^{I \times J \times K}$ with $\operatorname{rank}(\mathcal{B}) = (r_1, r_2, r_3)$ can be written as the product \[\mathcal{B} = (X, Y, Z) \cdot \mathcal{C}, \qquad b_{ijk} = \sum_{\lambda,\mu,\nu} x_{i\lambda} y_{j\mu} z_{k\nu} c_{\lambda\mu\nu},\] where $\mathcal{C} \in \mathbb{R}^{r_1 \times r_2 \times r_3}$ and $X \in \mathbb{R}^{I \times r_1}$, $Y \in \mathbb{R}^{J \times r_2}$, and $Z \in \mathbb{R}^{K \times r_3}$ are matrices of full column rank. Since it is no restriction to assume that $X$, $Y$, and $Z$ have orthonormal columns, and due to these constraints, the approximation problem can be considered as a nonlinear optimization problem defined on a product of Grassmann manifolds.
We introduce novel techniques for multilinear algebraic manipulations enabling means for theoretical analysis and algorithmic implementation. These techniques are used to solve the approximation problem using Newton and quasi-Newton methods specifically adapted to operate on products of Grassmann manifolds. The presented algorithms are suited for small, large, and sparse problems and, when applied to difficult problems, they clearly outperform alternating least squares methods, which are standard in the field. (A subspace-classification sketch follows this entry.)
  •  
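Both theses above build, for each class, an orthonormal basis of its dominant subspace and label an unknown digit by the smallest residual after projecting onto each basis. The sketch below keeps that classification step but obtains the bases with a plain per-class SVD on synthetic vectors rather than the HOSVD-compressed digit data used in the theses; all data and sizes are illustrative.

import numpy as np

def class_bases(class_data, k):
    # For each class: orthonormal basis of its dominant k-dimensional subspace,
    # i.e. the leading left singular vectors of the class's training matrix.
    return {label: np.linalg.svd(A, full_matrices=False)[0][:, :k]
            for label, A in class_data.items()}

def classify(x, bases):
    # Assign the label whose subspace gives the smallest residual ||x - U U^T x||.
    residuals = {label: np.linalg.norm(x - U @ (U.T @ x)) for label, U in bases.items()}
    return min(residuals, key=residuals.get)

# Synthetic stand-in for digit data: two classes of noisy 64-dimensional vectors.
rng = np.random.default_rng(2)
prototypes = {0: rng.standard_normal(64), 1: rng.standard_normal(64)}
train = {c: np.column_stack([prototypes[c] + 0.1 * rng.standard_normal(64)
                             for _ in range(20)])
         for c in (0, 1)}
bases = class_bases(train, k=5)
test = prototypes[1] + 0.1 * rng.standard_normal(64)
print(classify(test, bases))   # expected label: 1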
9.
  •  
10.
  • Savas, Berkant, 1977-, et al. (author)
  • Clustered low rank approximation of graphs in information science applications
  • 2011
  • In: Proceedings of the SIAM International Conference on Data Mining (SDM), pp. 164-175
  • Conference paper (peer-reviewed), abstract:
    • In this paper we present a fast and accurate procedure called clustered low rank matrix approximation for massive graphs. The procedure involves a fast clustering of the graph and then approximates each cluster separately using existing methods, e.g. the singular value decomposition, or stochastic algorithms. The cluster-wise approximations are then extended to approximate the entire graph. This approach has several benefits: (1) important community structure of the graph is preserved due to the clustering; (2) highly accurate low rank approximations are achieved; (3) the procedure is efficient both in terms of computational speed and memory usage; (4) better performance in problems from various applications compared to standard low rank approximation. Further, we generalize stochastic algorithms to the clustered low rank approximation framework and present theoretical bounds for the approximation error. Finally, a set of experiments, using large scale and real-world graphs, show that our methods outperform standard low rank matrix approximation algorithms. (A simplified code sketch follows this entry.)
  •  
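A simplified reading of the procedure described in the abstract above: cluster the graph, approximate each cluster's diagonal block with a small SVD basis, stack the bases block-diagonally into V, and approximate the whole adjacency matrix as V (V^T A V) V^T. The planted-cluster graph, cluster sizes, and ranks are illustrative assumptions; the final line only reports Frobenius errors, where a global SVD of equal total rank is optimal by construction but needs a dense basis, whereas V is block-sparse.

import numpy as np
from scipy.linalg import block_diag

def clustered_low_rank(A, labels, r):
    # Order nodes so that clusters are contiguous blocks.
    clusters = [np.flatnonzero(labels == c) for c in np.unique(labels)]
    order = np.concatenate(clusters)
    A = A[np.ix_(order, order)]
    # Rank-r basis for each cluster's diagonal block, stacked block-diagonally.
    blocks, start = [], 0
    for idx in clusters:
        m = len(idx)
        U = np.linalg.svd(A[start:start + m, start:start + m])[0]
        blocks.append(U[:, :r])
        start += m
    V = block_diag(*blocks)          # block-sparse basis for the whole graph
    S = V.T @ A @ V                  # small core coupling all clusters
    return V @ S @ V.T, A            # approximation and the reordered adjacency

# Toy undirected graph with two planted clusters.
rng = np.random.default_rng(3)
n = 40
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], 0.4, 0.05)
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T

A_clustered, A_perm = clustered_low_rank(A, labels, r=4)
# Global truncated SVD of the same total rank (2 clusters x 4) for reference;
# it is optimal in Frobenius norm, but its basis is dense, unlike V above.
U, s, Vt = np.linalg.svd(A_perm)
A_svd = U[:, :8] @ np.diag(s[:8]) @ Vt[:8, :]
print(np.linalg.norm(A_perm - A_clustered), np.linalg.norm(A_perm - A_svd))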
