SwePub
Search the SwePub database

  Extended search

Result list for the search "WFRF:(Quttineh Nils Hassan) srt2:(2006-2009)"

Search: WFRF:(Quttineh Nils Hassan) > (2006-2009)

  • Results 1-9 of 9
1.
  • Holmström, Kenneth, 1954-, et al. (author)
  • Adaptive Radial Basis Algorithms (ARBF) for Expensive Black-Box Global MINLP Optimization
  • 2008
  • In: SIOPT08 - Abstracts, p. 132-
  • Conference paper (peer-reviewed) abstract:
    • Improvements of the adaptive radial basis function algorithm (ARBF) for computationally costly optimization are presented. A new target value search algorithm and modifications to improve robustness and speed are discussed. The algorithm is implemented in the solver ARBFMIP in the TOMLAB Optimization Environment (http://tomopt.com/). Solvers in TOMLAB are used to solve the global and local subproblems. Results and comparisons with other solvers are presented for global optimization test problems. Performance on costly real-life applications is reported.
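To make the surrogate idea behind ARBF concrete, here is a minimal sketch of radial basis function interpolation in Python. It fits a Gaussian RBF interpolant to a few samples of a cheap stand-in function; the kernel, shape parameter, and sample points are illustrative assumptions, not the ARBFMIP implementation.

```python
import math

def gauss_solve(A, b):
    # Solve the dense linear system A x = b by Gaussian elimination
    # with partial pivoting (the RBF system here is small and symmetric).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, eps=1.0):
    # Gaussian radial basis: phi(r) = exp(-(eps * r)^2).  The interpolant
    # s(x) = sum_i w_i * phi(|x - x_i|) reproduces ys at xs exactly.
    phi = lambda r: math.exp(-((eps * r) ** 2))
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = gauss_solve(A, list(ys))
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

# Pretend sin is "expensive": sample it five times, then query the surrogate.
f = math.sin
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
surrogate = rbf_fit(xs, [f(x) for x in xs])
```

The solver then searches this cheap surrogate (combined, in ARBF's case, with a target value criterion) instead of the expensive function itself.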
2.
  • Holmström, Kenneth, 1954-, et al. (author)
  • An adaptive radial basis algorithm (ARBF) for expensive black-box mixed-integer constrained global optimization
  • 2008
  • In: Optimization and Engineering. Springer US. ISSN 1389-4420, E-ISSN 1573-2924. 9:4, p. 311-339
  • Journal article (peer-reviewed) abstract:
    • Response surface methods based on kriging and radial basis function (RBF) interpolation have been successfully applied to solve expensive, i.e. computationally costly, global black-box nonconvex optimization problems. In this paper we describe extensions of these methods to handle linear, nonlinear, and integer constraints. In particular, algorithms for standard RBF and the new adaptive RBF (ARBF) are described. Note, however, that while the objective function may be expensive, we assume that any nonlinear constraints are either inexpensive or are incorporated into the objective function via penalty terms. Test results are presented on standard test problems, both nonconvex problems with linear and nonlinear constraints, and mixed-integer nonlinear problems (MINLP). Solvers in the TOMLAB Optimization Environment (http://tomopt.com/tomlab/) have been compared, specifically the three deterministic derivative-free solvers rbfSolve, ARBFMIP, and EGO against three derivative-based mixed-integer nonlinear solvers, OQNLP, MINLPBB, and MISQP, as well as the GENO solver, which implements a stochastic genetic algorithm. Results show that the deterministic derivative-free methods compare well with the derivative-based ones, but the stochastic genetic algorithm solver is several orders of magnitude too slow for practical use. When the objective function for the test problems is costly to evaluate, the performance of the ARBF algorithm proves to be superior.
3.
4.
  • Holmström, Kenneth, 1954-, et al. (author)
  • The influence of Experimental Designs on the Performance of Surrogate Model Based Costly Global Optimization Solvers
  • 2009
  • In: Studies in Informatics and Control. National Institute for Research & Development in Informatics. ISSN 1220-1766, E-ISSN 1841-429X. 18:1, p. 87-95
  • Journal article (peer-reviewed) abstract:
    • When dealing with costly objective functions in optimization, one good alternative is to use a surrogate model approach. A common feature of all such methods is the need for an initial set of points, or "experimental design", in order to start the algorithm. Since the behavior of the algorithms often depends heavily on this set, the question is how to choose a good experimental design. We investigate this by solving a number of problems using different designs and comparing the outcomes with respect to function evaluations and a root mean square error test of the true function versus the surrogate model produced. Each combination of problem and design is solved by three different solvers available in the TOMLAB optimization environment. Results indicate that two of the designs are superior.
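As one concrete example of such an experimental design, the sketch below draws a small Latin hypercube sample in the unit hypercube: each dimension is split into n strata, and every stratum receives exactly one point. This is a generic illustration of the concept, not the specific designs benchmarked in the paper.

```python
import random

def latin_hypercube(n_points, n_dims, seed=0):
    # In each dimension, place one point in each of the n_points strata
    # [i/n, (i+1)/n), shuffling the strata independently per dimension.
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        strata = list(range(n_points))
        rng.shuffle(strata)
        columns.append([(s + rng.random()) / n_points for s in strata])
    return [tuple(col[i] for col in columns) for i in range(n_points)]

design = latin_hypercube(8, 2)  # 8 starting points in the unit square
```

Unlike a plain random sample, every one-dimensional projection of this design is evenly stratified, which is one reason Latin hypercube designs are popular starting sets for surrogate-based solvers.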
5.
  • Quttineh, Nils-Hassan, et al. (author)
  • Adaptive Radial Basis Algorithm (ARBF) for Expensive Black-Box Mixed-Integer Constrained Global Optimization
  • 2007
  • In: 2nd Mathematical Programming Society International Conference on Continuous Optimization, ICCOPT 07 - MOPTA 07, p. 30-
  • Conference paper (peer-reviewed) abstract:
    • Response surface methods based on kriging and radial basis function (RBF) interpolation have been successfully applied to solve expensive, i.e. computationally costly, global black-box nonconvex optimization problems. We describe extensions of these methods to handle linear, nonlinear, and integer constraints. In particular, standard RBF and the new adaptive RBF (ARBF) algorithms are discussed. Test results are presented on standard test problems, both nonconvex problems with linear and nonlinear constraints, and mixed-integer nonlinear problems. Solvers in the TOMLAB Optimization Environment (http://tomopt.com/tomlab/) are compared: the three deterministic derivative-free solvers rbfSolve, ARBFMIP, and EGO against three derivative-based mixed-integer nonlinear solvers, OQNLP, MINLPBB, and MISQP, as well as GENO, which implements a stochastic genetic algorithm. Assuming that the objective function is costly to evaluate, the performance of the ARBF algorithm proves to be superior.
6.
7.
  • Quttineh, Nils-Hassan, 1979- (author)
  • Algorithms for Costly Global Optimization
  • 2009
  • Licentiate thesis (other academic/artistic) abstract:
    • There exist many applications with so-called costly problems, which means that the objective function to be maximized or minimized cannot be described using standard functions and expressions. Instead, one considers these objective functions as a "black box" into which parameter values are sent and from which a function value is returned. This implies in particular that no derivative information is available.
      The reason for describing these problems as expensive is that it may take a long time to calculate a single function value. The black box could, for example, solve a large system of differential equations or carry out a heavy simulation, which can take anywhere from several minutes to several hours.
      These very special conditions therefore require customized algorithms. Common optimization algorithms are based on calculating function values every now and then, which usually can be done instantly. With an expensive problem, however, it may take several hours to compute a single function value. Our main objective is therefore to create algorithms that exploit all available information to the limit before a new function value is calculated. In other words, we want to find the optimal solution using as few function evaluations as possible.
      A good example of a real-life application comes from the automotive industry, where the development of new engines makes use of advanced models governed by a dozen key parameters. The goal is to optimize the model by changing the parameters in such a way that the engine becomes as energy efficient as possible while still meeting all sorts of demands on strength and external constraints.
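The evaluate-sparingly loop described above can be sketched in a few lines of Python. For brevity this sketch uses an inverse-distance-weighted surrogate and a fixed candidate grid as stand-ins for the RBF/kriging models and subproblem solvers discussed in the thesis; the test function, budget, and grid size are illustrative assumptions.

```python
def idw_surrogate(X, y, p=2):
    # Inverse-distance-weighted interpolant: a cheap stand-in for the
    # RBF/kriging response surfaces used by real CGO solvers.
    def s(x):
        wsum, num = 0.0, 0.0
        for xi, yi in zip(X, y):
            d = abs(x - xi)
            if d < 1e-12:
                return yi
            w = 1.0 / d ** p
            wsum += w
            num += w * yi
        return num / wsum
    return s

def costly_min(f, lo, hi, n_init=4, budget=12):
    # Core costly-optimization loop: evaluate an initial design, then
    # repeatedly fit a surrogate, minimize it over cheap candidates, and
    # spend one costly evaluation of f at the surrogate's minimizer.
    X = [lo + i * (hi - lo) / (n_init - 1) for i in range(n_init)]
    y = [f(x) for x in X]
    grid = [lo + i * (hi - lo) / 200 for i in range(201)]
    while len(X) < budget:
        s = idw_surrogate(X, y)
        x_new = min((g for g in grid if g not in X), key=s)
        X.append(x_new)
        y.append(f(x_new))
    best = min(range(len(y)), key=y.__getitem__)
    return X[best], y[best]
```

With f(x) = (x - 2)^2 on [0, 5] and a budget of 12 evaluations, the loop homes in on the region around x = 2 while calling f only 12 times.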
8.
  • Quttineh, Nils-Hassan, 1979-, et al. (author)
  • Implementation of a One-Stage Efficient Global Optimization (EGO) Algorithm
  • 2009
  • Report (other academic/artistic) abstract:
    • Almost every Costly Global Optimization (CGO) solver utilizes a surrogate model, or response surface, to approximate the true (costly) function. The EGO algorithm introduced by Jones et al. utilizes the DACE framework to build an approximating surrogate model. By optimizing a less costly utility function, the algorithm determines a new point where the original objective function is evaluated. This is repeated until some convergence criterion is fulfilled.
      The original EGO algorithm finds the new point to sample in a two-stage process. In the first stage, the estimates of the interpolation parameters are optimized with respect to the already sampled points. In the second stage, these estimated values are considered true in order to optimize the location of the new point. Treating the estimated values as correct introduces a source of error.
      In the One-Stage EGO algorithm, by contrast, both the parameter values and the location of the new point are optimized at the same time, removing this source of error. The new subproblem is more difficult to solve, but it eliminates the need to solve two subproblems.
      Difficulties in implementing a fast and robust One-Stage EGO algorithm in TOMLAB are discussed, especially the solution of the new subproblem.
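The "less costly utility function" in EGO's sampling step is usually the expected improvement of the surrogate's Gaussian prediction. A minimal sketch of that closed-form criterion follows; in a real implementation, mu and sigma would come from the DACE model, which is not reproduced here.

```python
import math

def expected_improvement(mu, sigma, f_min):
    # EI for minimization under a Gaussian prediction N(mu, sigma^2):
    #   EI = (f_min - mu) * Phi(z) + sigma * pdf(z),  z = (f_min - mu) / sigma
    # where Phi and pdf are the standard normal CDF and density.
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)  # no predictive uncertainty left
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (f_min - mu) * Phi + sigma * pdf
```

EGO evaluates the costly function where this quantity is largest; the one-stage variant instead folds the estimation of the model parameters into that same optimization.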
9.