SwePub
Search the SwePub database

  Advanced search

Result list for the search "WFRF:(Mu Biqiang)"

Search: WFRF:(Mu Biqiang)

  • Results 1-10 of 16
1.
  • Chen, Tianshi, et al. (author)
  • Regularized LTI System Identification with Multiple Regularization Matrix
  • 2018
  • In: 18th IFAC Symposium on System Identification (SYSID), Proceedings. Elsevier Science BV, pp. 180-185
  • Conference paper (peer-reviewed), abstract:
    • Regularization methods with a regularization matrix in quadratic form have received increasing attention. For these methods, the design and tuning of the regularization matrix are two key, closely related issues. For systems with complicated dynamics, it is preferable that the designed regularization matrix gives the hyper-parameter estimation problem a structure such that a locally optimal solution can be found efficiently. An example of this idea is the so-called multiple kernel of Chen et al. (2014) for kernel-based regularization methods. In this paper, we propose to use a multiple regularization matrix for filter-based regularization. Interestingly, the marginal likelihood maximization with the multiple regularization matrix is also a difference-of-convex programming problem, and a locally optimal solution can be found with sequential convex optimization techniques. (A minimal illustrative sketch of the multiple-regularization-matrix idea appears after this entry.)
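The sketch below illustrates the multiple-regularization-matrix idea referenced in the abstract above: the regularization matrix is formed as a nonnegative combination of fixed matrices (here TC kernels with different decay rates) and plugged into a regularized least-squares FIR estimate. The kernel choice, sizes, and hand-picked weights are assumptions made for the example; this is not the authors' algorithm.

```python
# A minimal sketch (illustrative assumptions, not the paper's method).
import numpy as np

def tc_kernel(n, lam):
    """TC kernel: P[i, j] = lam ** max(i, j), with 0 < lam < 1."""
    idx = np.arange(1, n + 1)
    return lam ** np.maximum.outer(idx, idx)

def regularized_fir(Phi, Y, P, sigma2):
    """Regularized LS estimate: g_hat = P Phi' (Phi P Phi' + sigma2 I)^{-1} Y."""
    return P @ Phi.T @ np.linalg.solve(Phi @ P @ Phi.T + sigma2 * np.eye(len(Y)), Y)

rng = np.random.default_rng(0)
n, N, sigma2 = 20, 200, 0.01
Phi = rng.standard_normal((N, n))                 # regression matrix (example data)
g0 = 0.8 ** np.arange(n)                          # "true" impulse response (example)
Y = Phi @ g0 + np.sqrt(sigma2) * rng.standard_normal(N)

# Multiple regularization matrix: P(alpha) = sum_k alpha_k * P_k with alpha_k >= 0.
P_list = [tc_kernel(n, lam) for lam in (0.5, 0.8, 0.95)]
alpha = np.array([0.2, 0.5, 0.3])                 # hyper-parameters, fixed by hand here
P = sum(a * Pk for a, Pk in zip(alpha, P_list))
print("fit error:", np.linalg.norm(regularized_fir(Phi, Y, P, sigma2) - g0))
```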
2.
  • Hong, Shiying, et al. (author)
  • Multiple Kernel Based Regularized System Identification with SURE Hyper-parameter Estimator
  • 2018
  • In: 18th IFAC Symposium on System Identification (SYSID), Proceedings. Elsevier Science BV, pp. 13-18
  • Conference paper (peer-reviewed), abstract:
    • In this work, we study multiple kernel based regularized system identification with the hyper-parameter estimated by Stein's unbiased risk estimator (SURE). To approach the problem, a QR factorization is first employed to compute SURE's objective function and its gradient in an efficient and accurate way. We then propose an algorithm to solve the SURE problem, consisting of an outer and an inner optimization part: coordinate descent is used for the outer part and projected gradient for the inner part. Finally, the efficacy of the proposed algorithm is demonstrated by numerical simulations. (A minimal sketch of a SURE-type criterion appears after this entry.)
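As referenced above, the sketch below evaluates one common SURE-type criterion for regularized FIR estimation on a hyper-parameter grid. It assumes a single TC kernel and a grid search; it is not the paper's QR-based computation or its coordinate-descent/projected-gradient optimizer.

```python
# A minimal sketch (assumed single-kernel form, not the paper's implementation):
#   SURE(eta) = ||Y - H(eta) Y||^2 + 2 * sigma2 * trace(H(eta)),
# where H(eta) = Phi P(eta) Phi' (Phi P(eta) Phi' + sigma2 I)^{-1} is the hat matrix.
import numpy as np

def tc_kernel(n, lam):
    idx = np.arange(1, n + 1)
    return lam ** np.maximum.outer(idx, idx)

def sure_criterion(Phi, Y, P, sigma2):
    N = len(Y)
    S = Phi @ P @ Phi.T + sigma2 * np.eye(N)
    H = Phi @ P @ Phi.T @ np.linalg.inv(S)        # hat matrix: Y_hat = H Y
    resid = Y - H @ Y
    return float(resid @ resid + 2.0 * sigma2 * np.trace(H))

rng = np.random.default_rng(1)
n, N, sigma2 = 20, 200, 0.01
Phi = rng.standard_normal((N, n))
Y = Phi @ (0.8 ** np.arange(n)) + np.sqrt(sigma2) * rng.standard_normal(N)

lams = np.linspace(0.4, 0.99, 30)                 # illustrative hyper-parameter grid
lam_sure = min(lams, key=lambda lam: sure_criterion(Phi, Y, tc_kernel(n, lam), sigma2))
print("SURE-selected kernel decay:", lam_sure)
```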
3.
  • Ju, Yue, et al. (author)
  • Asymptotic Theory for Regularized System Identification Part I: Empirical Bayes Hyperparameter Estimator
  • 2023
  • In: IEEE Transactions on Automatic Control. IEEE. ISSN 0018-9286, E-ISSN 1558-2523. 68:12, pp. 7224-7239
  • Journal article (peer-reviewed), abstract:
    • Regularized techniques, also known as kernel-based techniques, are among the major advances in system identification in the last decade. Although many promising results have been achieved, their theoretical analysis is far from complete and there are still many key problems to be solved. One of them is the asymptotic theory, which concerns the convergence properties of the model estimators as the sample size goes to infinity. The existing related results for regularized system identification concern the almost sure convergence of various hyperparameter estimators. A common problem of those results is that they do not contain information on the factors that affect the convergence properties of those hyperparameter estimators, e.g., the regression matrix. In this article, we tackle problems of this kind for regularized finite impulse response model estimation with the empirical Bayes (EB) hyperparameter estimator and filtered white noise input. In order to expose those factors, we study the convergence in distribution of the EB hyperparameter estimator and the asymptotic distribution of its corresponding model estimator. For illustration, we run Monte Carlo simulations to show the efficacy of the obtained theoretical results.
4.
  • Ju, Yue, et al. (author)
  • On Asymptotic Distribution of Generalized Cross Validation Hyper-parameter Estimator for Regularized System Identification
  • 2021
  • In: 2021 60th IEEE Conference on Decision and Control (CDC). IEEE. ISBN 9781665436595, pp. 1598-1602
  • Conference paper (peer-reviewed), abstract:
    • Asymptotic theory is one of the core subjects in system identification and is often used to assess the properties of model estimators. In this paper, we focus on the asymptotic theory for kernel-based regularized system identification and study the convergence in distribution of the generalized cross validation (GCV) based hyper-parameter estimator. It is shown that the difference between the GCV based hyper-parameter estimator and the optimal hyper-parameter estimator that minimizes the mean square error scaled by 1/√N converges in distribution to a zero-mean Gaussian distribution, where N is the sample size, and an expression for the covariance matrix is obtained. In particular, for the ridge regression case, a closed-form expression of the variance is obtained, which shows the influence of the limit of the regression matrix on the asymptotic distribution. For illustration, Monte Carlo numerical simulations are run to test our theoretical results. (A minimal sketch of the GCV score appears after this entry.)
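The sketch referenced above shows a textbook form of the GCV score for a regularized FIR estimator, applied to the ridge-regression case P = c·I mentioned in the abstract. The grid, noise level, and data sizes are assumptions for illustration; this is not the paper's derivation or code.

```python
# A minimal sketch (assumed textbook form of GCV, ridge case P = c * I):
#   GCV(c) = (1/N) ||Y - H(c) Y||^2 / (1 - trace(H(c))/N)^2 .
import numpy as np

def gcv_score(Phi, Y, P, sigma2):
    N = len(Y)
    S = Phi @ P @ Phi.T + sigma2 * np.eye(N)
    H = Phi @ P @ Phi.T @ np.linalg.inv(S)        # hat matrix of the regularized estimate
    resid = Y - H @ Y
    return float((resid @ resid / N) / (1.0 - np.trace(H) / N) ** 2)

rng = np.random.default_rng(2)
n, N, sigma2 = 20, 200, 0.01
Phi = rng.standard_normal((N, n))
Y = Phi @ (0.8 ** np.arange(n)) + np.sqrt(sigma2) * rng.standard_normal(N)

cs = np.logspace(-3, 2, 40)                       # illustrative ridge hyper-parameter grid
c_gcv = min(cs, key=lambda c: gcv_score(Phi, Y, c * np.eye(n), sigma2))
print("GCV-selected ridge level:", c_gcv)
```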
5.
  • Ju, Yue, et al. (author)
  • On Convergence in Distribution of Stein's Unbiased Risk Hyper-parameter Estimator for Regularized System Identification
  • 2022
  • In: 2022 41st Chinese Control Conference (CCC). IEEE. ISBN 9789887581536, 9781665482561, pp. 1491-1496
  • Conference paper (peer-reviewed), abstract:
    • Asymptotic theory for regularized system identification has received increasing interest in recent years. In this paper, for the finite impulse response (FIR) model and filtered white noise inputs, we show the convergence in distribution of the Stein's unbiased risk estimator (SURE) based hyper-parameter estimator and find factors that influence its convergence properties. In particular, we consider the ridge regression case to obtain closed-form expressions for the limit of the regression matrix and the variance of the limiting distribution of the SURE based hyper-parameter estimator, and then demonstrate their relation numerically.
6.
  • Ju, Yue, et al. (author)
  • On the Influence of Ill-conditioned Regression Matrix on Hyper-parameter Estimators for Kernel-based Regularization Methods
  • 2020
  • In: 2020 59th IEEE Conference on Decision and Control (CDC). IEEE. ISBN 9781728174471, pp. 300-305
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we study the influence of an ill-conditioned regression matrix on two hyper-parameter estimation methods for the kernel-based regularization method: the empirical Bayes (EB) method and Stein's unbiased risk estimator (SURE). First, we consider the convergence rate of the cost functions of EB and SURE, and we find that they have the same convergence rate but that the influence of the ill-conditioned regression matrix on the scale factor is different: for upper bounds, the scale factor for SURE contains one more factor cond(Φ^TΦ) than that of EB, where Φ is the regression matrix and cond(·) denotes the condition number of a matrix. This finding indicates that when Φ is ill-conditioned, i.e., cond(Φ^TΦ) is large, the cost function of SURE converges more slowly than that of EB. Then we consider the convergence rate of the optimal hyper-parameters of EB and SURE, and we find that they are both asymptotically normally distributed and have the same convergence rate, but that the influence of the ill-conditioned regression matrix on the scale factor is different. In particular, for the ridge regression case, we show that the optimal hyper-parameter of SURE converges more slowly than that of EB by a factor of 1/n², as cond(Φ^TΦ) goes to infinity, where n is the FIR model order. (A minimal numerical illustration of cond(Φ^TΦ) appears after this entry.)
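The illustration referenced above builds an FIR regression matrix Φ from an input sequence and compares cond(Φ^TΦ) for a white-noise input and a heavily low-pass filtered input, which tends to make Φ^TΦ ill-conditioned. The filter, model order, and data length are assumptions for the example, not the paper's experimental setup.

```python
# A minimal numerical illustration (assumptions throughout, not the paper's experiment).
import numpy as np

def fir_regression_matrix(u, n):
    """Rows are [u[t], u[t-1], ..., u[t-n+1]] for t = n-1, ..., len(u)-1."""
    return np.array([u[t - np.arange(n)] for t in range(n - 1, len(u))])

rng = np.random.default_rng(3)
u_white = rng.standard_normal(500)

# AR(1)-filtered input: u_f[t] = 0.95 * u_f[t-1] + e[t]  (strongly correlated)
u_filt = np.zeros_like(u_white)
for t in range(1, len(u_white)):
    u_filt[t] = 0.95 * u_filt[t - 1] + u_white[t]

for name, u in (("white", u_white), ("filtered", u_filt)):
    Phi = fir_regression_matrix(u, n=20)
    print(name, "cond(Phi^T Phi) =", np.linalg.cond(Phi.T @ Phi))
```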
7.
  • Ljung, Lennart, et al. (author)
  • A shift in paradigm for system identification
  • 2020
  • In: International Journal of Control. Taylor & Francis Ltd. ISSN 0020-7179, E-ISSN 1366-5820. 93:2, pp. 173-180
  • Journal article (peer-reviewed), abstract:
    • System identification is a mature research area with well-established paradigms, mostly based on classical statistical methods. Recently, there has been considerable interest in so-called kernel-based regularisation methods applied to the system identification problem. The recent literature on this is extensive and at times difficult to digest. The purpose of this contribution is to provide an accessible account of the main ideas and results of kernel-based regularisation methods for system identification. The focus is to assess the impact of these new techniques on the field and on the traditional paradigms.
8.
  • Mu, Biqiang, et al. (author)
  • Asymptotic Properties of Generalized Cross Validation Estimators for Regularized System Identification
  • 2018
  • In: 18th IFAC Symposium on System Identification (SYSID), Proceedings. Elsevier Science BV, pp. 203-208
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we study the asymptotic properties of the generalized cross validation (GCV) hyperparameter estimator and establish its connection with Stein's unbiased risk estimator (SURE) as well as the mean squared error (MSE). It is shown that as the number of data goes to infinity, the GCV has the same asymptotic property as the SURE does and both of them converge to the best hyperparameter in the MSE sense. We illustrate the efficacy of the result by Monte Carlo simulations.
9.
  • Mu, Biqiang, et al. (author)
  • Asymptotic Properties of Hyperparameter Estimators by Using Cross-Validations for Regularized System Identification
  • 2018
  • In: 2018 IEEE Conference on Decision and Control (CDC). IEEE. ISBN 9781538613955, pp. 644-649
  • Conference paper (peer-reviewed), abstract:
    • This paper studies the asymptotic properties of hyperparameter estimators including the leave-k-out cross validation (LKOCV) and r-fold cross validation (RFCV), and discloses their relation with Stein's unbiased risk estimator (SURE) as well as the mean squared error (MSE). It is shown that, as the number of data goes to infinity, the LKOCV shares the same asymptotically best hyperparameter in the MSE sense as the SURE does, if the input is bounded and the ratio between the training data and the whole data tends to zero. We illustrate the efficacy of the theoretical result by Monte Carlo simulations. (A minimal sketch of an r-fold CV score appears after this entry.)
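The sketch referenced above computes an r-fold cross validation score for the hyper-parameter of a regularized FIR estimator. The fold layout, TC kernel, and grid are assumptions made for the example; they are not the paper's setup or its asymptotic analysis.

```python
# A minimal sketch of an r-fold CV score for a regularized FIR estimator (illustrative).
import numpy as np

def tc_kernel(n, lam):
    idx = np.arange(1, n + 1)
    return lam ** np.maximum.outer(idx, idx)

def regularized_fir(Phi, Y, P, sigma2):
    return P @ Phi.T @ np.linalg.solve(Phi @ P @ Phi.T + sigma2 * np.eye(len(Y)), Y)

def rfold_cv_score(Phi, Y, P, sigma2, r=5):
    """Average squared prediction error over the held-out folds."""
    folds = np.array_split(np.arange(len(Y)), r)
    score = 0.0
    for hold in folds:
        train = np.setdiff1d(np.arange(len(Y)), hold)
        g = regularized_fir(Phi[train], Y[train], P, sigma2)
        e = Y[hold] - Phi[hold] @ g
        score += float(e @ e)
    return score / len(Y)

rng = np.random.default_rng(4)
n, N, sigma2 = 20, 200, 0.01
Phi = rng.standard_normal((N, n))
Y = Phi @ (0.8 ** np.arange(n)) + np.sqrt(sigma2) * rng.standard_normal(N)

lams = np.linspace(0.4, 0.99, 30)
lam_cv = min(lams, key=lambda lam: rfold_cv_score(Phi, Y, tc_kernel(n, lam), sigma2))
print("5-fold CV selected kernel decay:", lam_cv)
```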
10.
  • Mu, Biqiang, 1986-, et al. (author)
  • On asymptotic properties of hyperparameter estimators for kernel-based regularization methods
  • 2018
  • In: Automatica. Pergamon-Elsevier Science Ltd. ISSN 0005-1098, E-ISSN 1873-2836. 94, pp. 381-395
  • Journal article (peer-reviewed), abstract:
    • The kernel-based regularization method has two core issues: kernel design and hyperparameter estimation. In this paper, we focus on the second issue and study the properties of several hyperparameter estimators, including the empirical Bayes (EB) estimator, two Stein's unbiased risk estimators (SURE) (one related to impulse response reconstruction and the other to output prediction) and their corresponding Oracle counterparts, with an emphasis on the asymptotic properties of these hyperparameter estimators. To this end, we first derive and then rewrite the first-order optimality conditions of these hyperparameter estimators, leading to several insights into them. We then show that as the number of data goes to infinity, the two SUREs converge to the best hyperparameters minimizing the corresponding mean square errors, respectively, while the more widely used EB estimator converges to another best hyperparameter, the one minimizing the expectation of the EB estimation criterion. This indicates that the two SUREs are asymptotically optimal in the corresponding MSE senses but the EB estimator is not. Surprisingly, the convergence rate of the two SUREs is slower than that of the EB estimator, and moreover, unlike the two SUREs, the EB estimator is independent of the convergence rate of Φ^TΦ/N to its limit, where Φ is the regression matrix and N is the number of data. A Monte Carlo simulation is provided to demonstrate the theoretical results. (A minimal sketch of the EB cost appears after this entry.)
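The sketch referenced above evaluates the empirical Bayes cost that the EB hyper-parameter estimator minimizes, in an assumed single-kernel setting on a grid. The TC kernel, noise level, and grid are illustrative assumptions; this is not the paper's analysis or code.

```python
# A minimal sketch of the EB (marginal likelihood) cost:
#   F_EB(eta) = Y' S(eta)^{-1} Y + log det S(eta),  S(eta) = Phi P(eta) Phi' + sigma2 I.
import numpy as np

def tc_kernel(n, lam):
    idx = np.arange(1, n + 1)
    return lam ** np.maximum.outer(idx, idx)

def eb_cost(Phi, Y, P, sigma2):
    S = Phi @ P @ Phi.T + sigma2 * np.eye(len(Y))
    _, logdet = np.linalg.slogdet(S)              # numerically stable log-determinant
    return float(Y @ np.linalg.solve(S, Y) + logdet)

rng = np.random.default_rng(5)
n, N, sigma2 = 20, 200, 0.01
Phi = rng.standard_normal((N, n))
Y = Phi @ (0.8 ** np.arange(n)) + np.sqrt(sigma2) * rng.standard_normal(N)

lams = np.linspace(0.4, 0.99, 30)
lam_eb = min(lams, key=lambda lam: eb_cost(Phi, Y, tc_kernel(n, lam), sigma2))
print("EB-selected kernel decay:", lam_eb)
```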