SwePub
Search the SwePub database

  Advanced search

Hit list for the search "WFRF:(Andersson NG)"

Search: WFRF:(Andersson NG)

  • Results 1-50 of 67
1.
  • Thomas, HS, et al. (author)
  • 2019
  • swepub:Mat__t
  •  
2.
  •  
3.
  •  
4.
  • 2017
  • swepub:Mat__t
  •  
5.
  • Abel, I, et al. (author)
  • Overview of the JET results with the ITER-like wall
  • 2013
  • In: Nuclear Fusion. - : IOP Publishing. - 1741-4326 .- 0029-5515. ; 53:10, pp. 104002-
  • Journal article (peer-reviewed) abstract
    • Following the completion in May 2011 of the shutdown for the installation of the beryllium wall and the tungsten divertor, the first set of JET campaigns have addressed the investigation of the retention properties and the development of operational scenarios with the new plasma-facing materials. The large reduction in the carbon content (more than a factor ten) led to a much lower Zeff (1.2-1.4) during L- and H-mode plasmas, and radiation during the burn-through phase of the plasma initiation with the consequence that breakdown failures are almost absent. Gas balance experiments have shown that the fuel retention rate with the new wall is substantially reduced with respect to the C wall. The re-establishment of the baseline H-mode and hybrid scenarios compatible with the new wall has required an optimization of the control of metallic impurity sources and heat loads. Stable type-I ELMy H-mode regimes with H98,y2 close to 1 and βN ~ 1.6 have been achieved using gas injection. ELM frequency is a key factor for the control of the metallic impurity accumulation. Pedestal temperatures tend to be lower with the new wall, leading to reduced confinement, but nitrogen seeding restores high pedestal temperatures and confinement. Compared with the carbon wall, major disruptions with the new wall show a lower radiated power and a slower current quench. The higher heat loads on Be wall plasma-facing components due to lower radiation made the routine use of massive gas injection for disruption mitigation essential.
  •  
6.
  •  
7.
  • Amouzgar, Kaveh, 1980-, et al. (author)
  • A framework for simulation-based multi-objective optimization and knowledge discovery of machining process
  • 2018
  • In: The International Journal of Advanced Manufacturing Technology. - : Springer Science and Business Media LLC. - 0268-3768 .- 1433-3015. ; 98:9-12, pp. 2469-2486
  • Journal article (peer-reviewed) abstract
    • The current study presents an effective framework for automated multi-objective optimization (MOO) of machining processes by using finite element (FE) simulations. The framework is demonstrated by optimizing a metal cutting process in turning AISI-1045, using an uncoated K10 tungsten carbide tool. The aim of the MOO is to minimize the tool-chip interface temperature and tool wear depth, which are extracted from FE simulations, while maximizing the material removal rate. The effect of tool geometry parameters, i.e., clearance angle, rake angle, and cutting edge radius, and process parameters, i.e., cutting speed and feed rate, on the objective functions is explored. The strength Pareto evolutionary algorithm (SPEA2) is adopted for the study. The framework integrates and connects several modules to completely automate the entire MOO process. The framework also enables the MOO to be performed in parallel. Automation and parallel computing account for the practicality of MOO by using FE simulations. The trade-off solutions obtained by MOO are presented. A knowledge discovery study is carried out on the trade-off solutions. The non-dominated solutions are analyzed using a recently proposed data mining technique to gain a deeper understanding of the turning process.
  •  
8.
  • Amouzgar, Kaveh, 1980-, et al. (author)
  • Metamodel based multi-objective optimization of a turning process by using finite element simulation
  • Other publication (other academic/artistic) abstract
    • This study investigates the advantages and potentials of the metamodel-based multi-objective optimization (MOO) of a turning operation through the application of finite element simulations and evolutionary algorithms to a metal cutting process. The objectives are minimizing the interface temperature and tool wear depth obtained from FE simulations using DEFORM2D software, and maximizing the material removal rate. Tool geometry and process parameters are considered as the input variables. Seven metamodelling methods are employed and evaluated, based on accuracy and suitability. Radial basis functions with a priori bias and Kriging are chosen to model tool–chip interface temperature and tool wear depth, respectively. The non-dominated solutions are found using the strength Pareto evolutionary algorithm SPEA2 and compared with the non-dominated front obtained from pure simulation-based MOO. The metamodel-based MOO method is not only advantageous in terms of reducing the computational time by 70%, but is also able to discover 31 new non-dominated solutions over simulation-based MOO.
  •  
9.
  • Amouzgar, Kaveh, 1980-, et al. (author)
  • Metamodel based multi-objective optimization of a turning process by using finite element simulation
  • 2020
  • In: Engineering optimization (Print). - : Taylor & Francis Group. - 0305-215X .- 1029-0273. ; 52:7, pp. 1261-1278
  • Journal article (peer-reviewed) abstract
    • This study investigates the advantages and potentials of the metamodel-based multi-objective optimization (MOO) of a turning operation through the application of finite element simulations and evolutionary algorithms to a metal cutting process. The objectives are minimizing the interface temperature and tool wear depth obtained from FE simulations using DEFORM2D software, and maximizing the material removal rate. Tool geometry and process parameters are considered as the input variables. Seven metamodelling methods are employed and evaluated, based on accuracy and suitability. Radial basis functions with a priori bias and Kriging are chosen to model tool–chip interface temperature and tool wear depth, respectively. The non-dominated solutions are found using the strength Pareto evolutionary algorithm SPEA2 and compared with the non-dominated front obtained from pure simulation-based MOO. The metamodel-based MOO method is not only advantageous in terms of reducing the computational time by 70%, but is also able to discover 31 new non-dominated solutions over simulation-based MOO.
  •  
10.
  • Amouzgar, Kaveh, 1980- (author)
  • Metamodel Based Multi-Objective Optimization with Finite-Element Applications
  • 2018
  • Doctoral thesis (other academic/artistic) abstract
    • As a result of the increase in accessibility of computational resources and the increase of computer power during the last two decades, designers are able to create computer models to simulate the behavior of complex products. To address global competitiveness, companies are forced to optimize the design of their products and production processes. Optimizing the design and production very often needs several runs of computationally expensive simulation models. Therefore, integrating metamodels, as efficient and sufficiently accurate approximations of the simulation model, with optimization algorithms is necessary. Furthermore, in most engineering problems, more than one objective function has to be optimized, leading to multi-objective optimization (MOO). The need to employ metamodels in MOO, i.e., metamodel-based MOO (MB-MOO), is therefore even greater. Radial basis functions (RBF) is one of the most popular metamodeling methods. In this thesis, a new approach to constructing RBF with the bias to be set a priori by using the normal equation is proposed. The performance of the suggested approach is compared to the classic RBF and four other well-known metamodeling methods, in terms of accuracy, efficiency and, most importantly, suitability for integration with MOO evolutionary algorithms. It has been found that the proposed approach is accurate on most of the test functions and the fastest of the compared methods. Additionally, the new approach is the most suitable method for MB-MOO when integrated with evolutionary algorithms. The proposed approach is integrated with the strength Pareto evolutionary algorithm (SPEA2) and applied to two real-world engineering problems: MB-MOO of the disk brake system of a heavy truck, and the metal cutting process in a turning operation. Thereafter, the Pareto-optimal fronts are obtained and the results are presented. The MB-MOO in both case studies has been found to be an efficient and effective method. To validate the results of the latter MB-MOO case study, a framework for automated finite element (FE) simulation-based MOO (SB-MOO) of machining processes is developed and presented by applying it to the same metal cutting process in a turning operation. It has been proved that the framework is effective in achieving the MOO of machining processes based on actual FE simulations.
  •  
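The thesis in record 10 constructs RBF metamodels whose bias is set a priori via the normal equation. Purely as an illustration of that general idea, and not the thesis's actual formulation, the sketch below first fits a linear bias by ordinary least squares (the normal equation) and then fits Gaussian RBF weights to the residuals; the Gaussian kernel, its width, and the regularization term are assumptions.

```python
import numpy as np

def fit_rbf_with_prior_bias(X, y, width=1.0):
    """Illustrative RBF metamodel with an a priori linear bias.

    Step 1 obtains the bias b(x) = c0 + c^T x from the normal equation
    (ordinary least squares); step 2 fits Gaussian RBF weights to the
    residuals y - b(X). Kernel choice and width are assumptions.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = X.shape[0]

    # 1) A priori bias via the normal equation.
    B = np.hstack([np.ones((n, 1)), X])
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    residual = y - B @ coef

    # 2) Gaussian RBF weights fitted to the residuals (small ridge for conditioning).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2.0 * width ** 2))
    w = np.linalg.solve(Phi + 1e-8 * np.eye(n), residual)

    def predict(Xq):
        Xq = np.atleast_2d(np.asarray(Xq, float))
        Bq = np.hstack([np.ones((Xq.shape[0], 1)), Xq])
        d2q = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return Bq @ coef + np.exp(-d2q / (2.0 * width ** 2)) @ w

    return predict

# Toy usage: approximate a noisy quadratic from a handful of samples.
rng = np.random.default_rng(0)
Xs = rng.uniform(-2, 2, size=(30, 2))
ys = (Xs ** 2).sum(1) + 0.01 * rng.normal(size=30)
model = fit_rbf_with_prior_bias(Xs, ys, width=1.5)
print(model([[0.5, -0.5]]))
```

Fitting the bias separately keeps the RBF weights responsible only for the deviation from the global trend, which is one common way of combining a polynomial term with radial basis functions.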
11.
  •  
12.
  •  
13.
  • Andersson, Martin, 1981- (author)
  • A bilevel approach to parameter tuning of optimization algorithms using evolutionary computing : Understanding optimization algorithms through optimization
  • 2018
  • Doctoral thesis (other academic/artistic) abstract
    • Most optimization problems found in the real world cannot be solved using analytical methods. For these types of difficult optimization problems, an alternative approach is needed. Metaheuristics are a category of optimization algorithms that do not guarantee that an optimal solution will be found, but instead search for the best solutions using some general heuristics. Metaheuristics have been shown to be effective at finding “good-enough” solutions to a wide variety of difficult problems. Most metaheuristics involve control parameters that can be used to modify how the heuristic performs its search. This is necessary because different problems may require different search strategies to be solved effectively. The control parameters allow for the optimization algorithm to be adapted to the problem at hand. It is, however, difficult to predict what the optimal control parameters are for any given problem. The problem of finding these optimal control parameter values is known as parameter tuning and is the main topic of this thesis. This thesis uses a bilevel optimization approach to solve parameter tuning problems. In this approach, the parameter tuning problem itself is formulated as an optimization problem and solved with an optimization algorithm. The parameter tuning problem formulated as a bilevel optimization problem is challenging because of nonlinear objective functions, interacting variables, multiple local optima, and noise. However, it is in precisely this kind of difficult optimization problem that evolutionary algorithms, which are a subclass of metaheuristics, have been shown to be effective. That is the motivation for using evolutionary algorithms for the upper-level optimization (i.e. tuning algorithm) of the bilevel optimization approach. Solving the parameter tuning problem using a bilevel optimization approach is also computationally expensive, since a complete optimization run has to be completed for every evaluation of a set of control parameter values. It is therefore important that the tuning algorithm be as efficient as possible, so that the parameter tuning problem can be solved to a satisfactory level with relatively few evaluations. Even so, bilevel optimization experiments can take a long time to run on a single computer. There is, however, considerable parallelization potential in the bilevel optimization approach, since many of the optimizations are independent of one another. This thesis has three primary aims: first, to present a bilevel optimization framework and software architecture for parallel parameter tuning; second, to use this framework and software architecture to evaluate and configure evolutionary algorithms as tuners and compare them with other parameter tuning methods; and, finally, to use parameter tuning experiments to gain new insights into and understanding of how optimization algorithms work and how they can be used to their maximum potential. The proposed framework and software architecture have been implemented and deployed on more than one hundred computers running many thousands of parameter tuning experiments for many millions of optimizations. This illustrates that this design and implementation approach can handle large parameter tuning experiments. Two types of evolutionary algorithms, i.e. differential evolution (DE) and a genetic algorithm (GA), have been evaluated as tuners against the parameter tuning algorithm irace. The aspects of algorithm configuration and noise handling for DE and the GA as related to the parameter tuning problem were also investigated. The results indicate that dynamic resampling strategies outperform static resampling strategies. It was also shown that the GA needs an explicit exploration and exploitation strategy in order not to become stuck in local optima. The comparison with irace shows that both DE and the GA can significantly outperform it in a variety of different tuning problems.
  •  
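Records 13 and 14 frame parameter tuning as a bilevel optimization: an upper-level tuner proposes control parameter values, and each candidate is scored by running the tuned (lower-level) optimizer several times to average out noise. The following is a minimal sketch of that loop under simplifying assumptions: a toy (1+1) evolution strategy on the sphere function stands in for the lower level and plain random search for the tuner, whereas the thesis itself uses DE and a GA as tuners in far larger experiments.

```python
import random

def lower_level_run(step_size, budget=200):
    """Toy (1+1) evolution strategy on the sphere function.

    `step_size` is the control parameter being tuned; the return value
    is the best objective found (lower is better).
    """
    x = [random.uniform(-5, 5) for _ in range(5)]
    best = sum(v * v for v in x)
    for _ in range(budget):
        cand = [v + random.gauss(0, step_size) for v in x]
        f = sum(v * v for v in cand)
        if f < best:
            x, best = cand, f
    return best

def evaluate_parameters(step_size, replications=10):
    """Average several replications to dampen the stochastic noise."""
    return sum(lower_level_run(step_size) for _ in range(replications)) / replications

# Upper level: plain random search over the single control parameter.
best_param, best_score = None, float("inf")
for _ in range(30):
    p = random.uniform(0.01, 2.0)
    score = evaluate_parameters(p)
    if score < best_score:
        best_param, best_score = p, score
print(f"tuned step size ~ {best_param:.3f}, mean best objective {best_score:.4f}")
```

Because every candidate evaluation is an independent batch of lower-level runs, this structure parallelizes naturally, which is the property the thesis exploits on its computing cluster.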
14.
  • Andersson, Martin, 1981-, et al. (author)
  • A Parallel Computing Software Architecture for the Bilevel Parameter Tuning of Optimization Algorithms
  • Other publication (other academic/artistic) abstract
    • Most optimization algorithms extract important algorithmic design decisions as control parameters. This is necessary because different problems can require different search strategies to be solved effectively. The control parameters allow for the optimization algorithm to be adapted to the problem at hand. It is, however, difficult to predict what the optimal control parameters are for any given problem. Finding these optimal control parameter values is referred to as the parameter tuning problem. One approach to solving the parameter tuning problem is to use a bilevel optimization where the parameter tuning problem itself is formulated as an optimization problem involving algorithmic performance as the objective(s). In this paper, we present a framework and architecture that can be used to solve large-scale parameter tuning problems using a bilevel optimization approach. The proposed framework is used to show that evolutionary algorithms are competitive as tuners against irace, which is a state-of-the-art tuning method. Two evolutionary algorithms, differential evolution (DE) and a genetic algorithm (GA), are evaluated as tuner algorithms using the proposed framework and software architecture. The importance of replicating optimizations and avoiding local optima is also investigated. The architecture is deployed and tested by running millions of optimizations using a computing cluster. The results indicate that the evolutionary algorithms can consistently find better control parameter values than irace. The GA, however, needs to be configured for an explicit exploration and exploitation strategy in order to avoid local optima.
  •  
15.
  • Andersson, Marcus, et al. (author)
  • A web-based simulation optimization system for industrial scheduling
  • 2007
  • In: Proceedings of the 39th conference on Winter simulation. - : IEEE Press. - 1424413060 ; , pp. 1844-1852
  • Conference paper (peer-reviewed) abstract
    • Many real-world production systems are complex in nature and it is a real challenge to find an efficient scheduling method that satisfies the production requirements as well as utilizes the resources efficiently. Tools like discrete event simulation (DES) are very useful for modeling these systems and can be used to test and compare different schedules before dispatching the best schedules to the targeted systems. DES alone, however, cannot be used to find the "optimal" schedule. Simulation-based optimization (SO) can be used to search for optimal schedules efficiently without too much user intervention. Observing that long computing time may prohibit the interest in using SO for industrial scheduling, various techniques to speed up the SO process have to be explored. This paper presents a case study that shows the use of a Web-based parallel and distributed SO platform to support the operations scheduling of a machining line in an automotive factory.
  •  
16.
  • Andersson, Martin, 1981-, et al. (author)
  • Evolutionary Simulation Optimization of Personnel Scheduling
  • 2014
  • In: 12th International Industrial Simulation Conference 2014. - : Eurosis. - 9789077381830 ; , pp. 61-65
  • Conference paper (peer-reviewed) abstract
    • This paper presents a simulation-optimization system for personnel scheduling. The system is developed for the Swedish postal services and aims at finding personnel schedules that minimize both total man-hours and the administrative burden of the person responsible for handling schedules. For the optimization, the multi-objective evolutionary algorithm NSGA-II is implemented. The simulation-optimization system is evaluated on a real-world test case, and results from the evaluation show that the algorithm is successful in optimizing the problem.
  •  
17.
  • Andersson, Martin, 1981-, et al. (author)
  • On the Trade-off Between Runtime and Evaluation Efficiency In Evolutionary Algorithms
  • Other publication (other academic/artistic) abstract
    • Evolutionary optimization algorithms typically use one or more parameters that control their behavior. These parameters, which are often kept constant, can be tuned to improve the performance of the algorithm on specific problems. However, past studies have indicated that the performance can be further improved by adapting the parameters during runtime. A limitation of these studies is that they only control, at most, a few parameters, thereby missing potentially beneficial interactions between them. Instead of finding a direct control mechanism, the novel approach in this paper is to use different parameter sets in different stages of an optimization. These multiple parameter sets, which remain static within each stage, are tuned through extensive bi-level optimization experiments that approximate the optimal adaptation of the parameters. The algorithmic performance obtained with tuned multiple parameter sets is compared against that obtained with a single parameter set. For the experiments in this paper, the parameters of NSGA-II are tuned when applied to the ZDT, DTLZ and WFG test problems. The results show that using multiple parameter sets can significantly increase the performance over a single parameter set.
  •  
18.
  • Andersson, Martin, 1981-, et al. (author)
  • Parameter tuned CMA-ES on the CEC'15 expensive problems
  • 2015
  • In: 2015 IEEE Congress on Evolutionary Computation (CEC). - : IEEE conference proceedings. - 9781479974924 - 9781479974917 ; , pp. 1950-1957
  • Conference paper (peer-reviewed) abstract
    • Evolutionary optimization algorithms have parameters that are used to adapt the search strategy to suit different optimization problems. Selecting the optimal parameter values for a given problem is difficult without a priori knowledge. Experimental studies can provide this knowledge by finding the best parameter values for a specific set of problems. This knowledge can also be constructed into heuristics (rules of thumb) that can adapt the parameters for the problem. The aim of this paper is to assess the heuristics of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimization algorithm. This is accomplished by tuning CMA-ES parameters so as to maximize its performance on the CEC'15 problems, using a bilevel optimization approach that searches for the optimal parameter values. The optimized parameter values are compared against the parameter values suggested by the heuristics. The difference between specialized and generalized parameter values is also investigated.
  •  
19.
  • Andersson, Martin, 1981-, et al. (author)
  • Parameter Tuning Evolutionary Algorithms for Runtime versus Cost Trade-off in a Cloud Computing Environment
  • 2018
  • In: Simulation Modelling Practice and Theory. - : Elsevier. - 1569-190X. ; 89, pp. 195-205
  • Journal article (peer-reviewed) abstract
    • The runtime of an evolutionary algorithm can be reduced by increasing the number of parallel evaluations. However, increasing the number of parallel evaluations can also result in wasted computational effort, since there is a greater probability of creating solutions that do not contribute to convergence towards the global optimum. A trade-off, therefore, arises between the runtime and computational effort for different levels of parallelization of an evolutionary algorithm. When the computational effort is translated into cost, the trade-off can be restated as runtime versus cost. This trade-off is particularly relevant for cloud computing environments, where the computing resources can be exactly matched to the level of parallelization of the algorithm, and the cost is proportional to the runtime and to how many instances are used. This paper empirically investigates this trade-off for two different evolutionary algorithms, NSGA-II and differential evolution (DE), when applied to a multi-objective discrete-event simulation-based (DES) problem. Both generational and steady-state asynchronous versions of both algorithms are included. The approach is to perform parameter tuning on a simplified version of the DES model. A subset of the best configurations from each tuning experiment is then evaluated on a cloud computing platform. The results indicate that, for the included DES problem, the steady-state asynchronous version of each algorithm provides a better runtime versus cost trade-off than the generational versions and that DE outperforms NSGA-II.
  •  
20.
  • Andersson, Martin, 1981-, et al. (author)
  • Parameter Tuning of MOEAs Using a Bilevel Optimization Approach
  • 2015
  • In: Evolutionary Multi-Criterion Optimization. - Cham : Springer International Publishing Switzerland. - 9783319159331 - 9783319159348 ; , pp. 233-247
  • Conference paper (peer-reviewed) abstract
    • The performance of an Evolutionary Algorithm (EA) can be greatly influenced by its parameters. The optimal parameter settings are also not necessarily the same across different problems. Finding the optimal set of parameters is therefore a difficult and often time-consuming task. This paper presents results of parameter tuning experiments on the NSGA-II and NSGA-III algorithms using the ZDT test problems. The aim is to gain new insights on the characteristics of the optimal parameter settings and to study if the parameters impose the same effect on both NSGA-II and NSGA-III. The experiments also aim at testing if the rule of thumb that the mutation probability should be set to one divided by the number of decision variables is a good heuristic on the ZDT problems. A comparison of the performance of NSGA-II and NSGA-III on the ZDT problems is also made.
  •  
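Record 20 tests the rule of thumb that the mutation probability should be set to one divided by the number of decision variables. As a hedged illustration of how that heuristic is typically applied, the sketch below uses a simplified polynomial mutation operator with per-variable probability 1/n; the simplification ignores each variable's distance to its bounds, which the full NSGA-II operator accounts for.

```python
import random

def polynomial_mutation(x, low, high, pm=None, eta=20.0):
    """Simplified polynomial mutation of a real-valued vector.

    The rule of thumb examined in the paper sets the per-variable
    mutation probability pm to 1/len(x); eta is the distribution index.
    """
    n = len(x)
    pm = 1.0 / n if pm is None else pm
    child = list(x)
    for i in range(n):
        if random.random() < pm:
            u = random.random()
            if u < 0.5:
                delta = (2 * u) ** (1 / (eta + 1)) - 1
            else:
                delta = 1 - (2 * (1 - u)) ** (1 / (eta + 1))
            # Perturb and clip to the variable bounds.
            child[i] = min(max(child[i] + delta * (high - low), low), high)
    return child

print(polynomial_mutation([0.2, 0.5, 0.8, 0.1, 0.9], low=0.0, high=1.0))
```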
21.
  •  
22.
  • Andersson, Marcus, et al. (author)
  • Simulation Optimization for Industrial Scheduling Using Hybrid Genetic Representation
  • 2008
  • In: Proceedings of the 2008 Winter Simulation Conference. - : IEEE conference proceedings. - 9781424427086 ; , pp. 2004-2011
  • Conference paper (peer-reviewed) abstract
    • Simulation modeling has the capability to represent complex real-world systems in detail, and it is therefore suitable for developing simulation models that generate detailed operation plans to control the shop floor. In the literature, there are two major approaches for tackling simulation-based scheduling problems, namely dispatching rules and meta-heuristic search algorithms. The purpose of this paper is to illustrate that there are advantages when these two approaches are combined. More precisely, this paper introduces a novel hybrid genetic representation as a combination of both a partially completed schedule (direct) and the optimal dispatching rules (indirect), for setting the schedules for some critical stages (e.g. bottlenecks) and other non-critical stages respectively. When applied to an industrial case study, this hybrid method has been found to outperform the two common approaches, in terms of finding reasonably good solutions within a shorter time period for most of the complex scheduling scenarios.
  •  
23.
  • Andersson, Martin, et al. (author)
  • Towards Optimal Algorithmic Parameters for Simulation-Based Multi-Objective Optimization
  • 2016
  • In: 2016 IEEE Congress on Evolutionary Computation (CEC). - New York : IEEE. - 9781509006236 - 9781509006229 - 9781509006243 ; , pp. 5162-5169
  • Conference paper (peer-reviewed) abstract
    • The use of optimization to solve a simulation-based multi-objective problem produces a set of solutions that provide information about the trade-offs that have to be considered by the decision maker. An incomplete or sub-optimal set of solutions will negatively affect the quality of any subsequent decisions. The parameters that control the search behavior of an optimization algorithm can be used to minimize this risk. However, choosing good parameter settings for a given optimization algorithm and problem combination is difficult. The aim of this paper is to take a step towards optimal parameter settings for optimization of simulation-based problems. Two parameter tuning methods, Latin Hypercube Sampling and Genetic Algorithms, are used to maximize the performance of NSGA-II applied to a simulation-based problem with discrete variables. The strengths and weaknesses of both methods are analyzed. The effect of the number of decision variables and the function budget on the optimal parameter settings is also studied.
  •  
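Record 23 uses Latin Hypercube Sampling (LHS) as one of its two tuning methods. As a rough illustration, the sketch below draws an LHS design over a hypothetical NSGA-II parameter box (population size, crossover probability, mutation probability); the bounds are made-up values, not those used in the paper.

```python
import random

def latin_hypercube(n_samples, bounds):
    """Latin hypercube sample: each dimension is split into n_samples
    equal strata, and every stratum is used exactly once."""
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        random.shuffle(strata)
        for i, s in enumerate(strata):
            u = (s + random.random()) / n_samples  # point within the stratum
            samples[i][d] = lo + u * (hi - lo)
    return samples

# Hypothetical parameter box: population size, crossover prob., mutation prob.
bounds = [(20, 200), (0.6, 1.0), (0.01, 0.3)]
for s in latin_hypercube(5, bounds):
    print([round(v, 3) for v in s])
```

Compared with a genetic-algorithm tuner, a design like this spreads the evaluation budget evenly over the parameter space instead of concentrating it around promising regions, which is essentially the trade-off the paper analyzes.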
24.
  • Andersson, Martin, et al. (author)
  • Tuning of Multiple Parameter Sets in Evolutionary Algorithms
  • 2016
  • In: GECCO'16. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450342063 ; , pp. 533-540
  • Conference paper (peer-reviewed) abstract
    • Evolutionary optimization algorithms typically use one or more parameters that control their behavior. These parameters, which are often kept constant, can be tuned to improve the performance of the algorithm on specific problems. However, past studies have indicated that the performance can be further improved by adapting the parameters during runtime. A limitation of these studies is that they only control, at most, a few parameters, thereby missing potentially beneficial interactions between them. Instead of finding a direct control mechanism, the novel approach in this paper is to use different parameter sets in different stages of an optimization. These multiple parameter sets, which remain static within each stage, are tuned through extensive bi-level optimization experiments that approximate the optimal adaptation of the parameters. The algorithmic performance obtained with tuned multiple parameter sets is compared against that obtained with a single parameter set. For the experiments in this paper, the parameters of NSGA-II are tuned when applied to the ZDT, DTLZ and WFG test problems. The results show that using multiple parameter sets can significantly increase the performance over a single parameter set.
  •  
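Records 17 and 24 tune multiple parameter sets that remain static within each stage of an optimization run. A minimal sketch of the staging mechanism itself is given below; the stage boundaries and parameter values are illustrative placeholders, whereas in the papers they are found through bilevel tuning experiments.

```python
# Hypothetical staged schedule: three static parameter sets switched at
# fixed generation boundaries; names and values are illustrative only.
stages = [
    (0,   {"crossover_p": 0.9, "mutation_p": 0.10}),   # early: exploration
    (100, {"crossover_p": 0.8, "mutation_p": 0.05}),   # middle: consolidation
    (200, {"crossover_p": 0.7, "mutation_p": 0.01}),   # late: exploitation
]

def active_parameters(generation):
    """Return the parameter set of the stage the generation falls in."""
    current = stages[0][1]
    for start, params in stages:
        if generation >= start:
            current = params
    return current

for g in (0, 50, 150, 250):
    print(g, active_parameters(g))
```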
25.
  •  
26.
  • Andersson, V., et al. (author)
  • Large-Area Balloon-Borne Polarized Gamma Ray Observer (PoGO)
  • 2005
  • In: Proceedings of the 22nd Texas Symposium on Relativistic Astrophysics at Stanford. ; , pp. 736-743
  • Conference paper (peer-reviewed) abstract
    • We are developing a new balloon-borne instrument (PoGO) to measure polarization of soft gamma rays (30-200 keV) using the asymmetry in the azimuth angle distribution of Compton scattering. PoGO is designed to detect 10% polarization in 100 mCrab sources in a 6-8 hour observation and bring a new dimension to studies of gamma-ray emission/transport mechanisms in pulsars, AGNs, black hole binaries, and neutron star surfaces. The concept is an adaptation to polarization measurements of the well-type phoswich counter, consisting of a fast plastic scintillator (the detection part), a slow plastic scintillator (the active collimator) and a BGO scintillator (the bottom anti-counter). PoGO consists of a close-packed array of 217 hexagonal well-type phoswich counters and has a narrow field-of-view (~5 deg²) to reduce possible source confusion. A prototype instrument has been tested in the polarized soft gamma-ray beams at the Advanced Photon Source (ANL) and at the Photon Factory (KEK). Based on these results, the polarization dependence of EGS4 has been validated and that of Geant4 has been corrected.
  •  
27.
  •  
28.
  •  
29.
  • Bandaru, Sunith, et al. (author)
  • Metamodel-based prediction of performance metrics for bilevel parameter tuning in MOEAs
  • 2016
  • In: 2016 IEEE Congress on Evolutionary Computation (CEC). - New York : IEEE. - 9781509006236 - 9781509006229 - 9781509006243 ; , pp. 1909-1916
  • Conference paper (peer-reviewed) abstract
    • We consider a bilevel parameter tuning problem where the goal is to maximize the performance of a given multi-objective evolutionary optimizer on a given problem. The search for optimal algorithmic parameters requires the assessment of several sets of parameters, through multiple optimization runs, in order to mitigate the effect of noise that is inherent to evolutionary algorithms. This task is computationally expensive and therefore, in this paper, we propose to use sampling and metamodeling to approximate the performance of the optimizer as a function of its parameters. While such an approach is not unheard of, the choice of the metamodel to be used still remains unclear. The aim of this paper is to empirically compare 11 different metamodeling techniques with respect to their accuracy and training times in predicting two popular multi-objective performance metrics, namely, the hypervolume and the inverted generational distance. For the experiments in this pilot study, NSGA-II is used as the multi-objective optimizer for solving ZDT problems, 1 through 4.
  •  
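Record 29 trains metamodels to predict the hypervolume and the inverted generational distance from an optimizer's parameters. For reference, the sketch below computes the exact hypervolume of a two-objective minimization front against a reference point, i.e. the quantity such a metamodel would be trained to predict; the front and reference point are toy values.

```python
def hypervolume_2d(front, reference):
    """Area dominated by a 2-objective minimization front w.r.t. `reference`.

    Points that do not dominate the reference contribute nothing;
    dominated points are skipped by the staircase sweep.
    """
    pts = sorted(p for p in front if p[0] < reference[0] and p[1] < reference[1])
    area, prev_f2 = 0.0, reference[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                       # keep only the non-dominated staircase
            area += (reference[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area

front = [(0.1, 0.9), (0.3, 0.5), (0.6, 0.2), (0.4, 0.6)]  # (0.4, 0.6) is dominated
print(hypervolume_2d(front, reference=(1.0, 1.0)))         # 0.49
```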
30.
  • Cervin, Nicholas, et al. (author)
  • Lightweight and strong cellulose materials made from aqueous foams stabilized by NanoFibrillated Cellulose (NFC)
  • Other publication (other academic/artistic) abstract
    • A novel, lightweight and strong porous cellulose material has been prepared by drying aqueous foams stabilized with surface-modified NanoFibrillated Cellulose (NFC). This material differs from other particle-stabilized foams in that we use renewable cellulose as the stabilizing particle. Confocal microscopy and high speed video imaging show that the long-term stability of the wet foams can be attributed to the octylamine-coated, rod-shaped NFC nanoparticles residing at the air-liquid interface, which prevent the air bubbles from collapsing or coalescing. This can be achieved at a solids content around 1% by weight. Careful removal of the water results in a cellulose-based material with a porosity of 98% and a density of 30 mg cm⁻³. These porous cellulose materials have a higher Young's modulus than other cellulose materials made by freeze drying and a compressive energy absorption of 56 kJ m⁻³ at 80% strain. Measurement with the aid of an autoporosimeter revealed that most pores are in the range of 300 to 500 μm.
  •  
31.
  • Diwakarla, Shanti, et al. (author)
  • Binding to and Inhibition of Insulin-Regulated Aminopeptidase (IRAP) by Macrocyclic Disulfides Enhances Spine Density
  • 2016
  • In: Molecular Pharmacology. - : American Society for Pharmacology & Experimental Therapeutics (ASPET). - 0026-895X .- 1521-0111. ; 89:4, pp. 413-424
  • Journal article (peer-reviewed) abstract
    • Angiotensin IV (Ang IV) and related peptide analogues, as well as non-peptide inhibitors of insulin-regulated aminopeptidase (IRAP), have previously been shown to enhance memory and cognition in animal models. Furthermore, the endogenous IRAP substrates oxytocin and vasopressin are known to facilitate learning and memory. In this study, the two recently synthesized 13-membered macrocyclic competitive IRAP inhibitors HA08 and HA09, which were designed to mimic the N-terminal of oxytocin and vasopressin, were assessed and compared based on their ability to bind to the IRAP active site, and alter dendritic spine density in rat hippocampal primary cultures. The binding modes of the IRAP inhibitors HA08, HA09 and of Ang IV in either the extended or γ-turn conformation at the C-terminal to human IRAP were predicted by docking and molecular dynamics (MD) simulations. The binding free energies calculated with the linear interaction energy (LIE) method, which are in excellent agreement with experimental data and simulations, have been used to explain the differences in activities of the IRAP inhibitors, both of which are structurally very similar, but differ only with regard to one stereogenic center. In addition, we show that HA08, which is 100-fold more potent than the epimer HA09, can enhance dendritic spine number and alter morphology, a process associated with memory facilitation. Therefore, HA08, one of the most potent IRAP inhibitors known today, may serve as a suitable starting point for medicinal chemistry programs aided by MD simulations aimed at discovering more drug-like cognitive enhancers acting via augmenting synaptic plasticity.
  •  
32.
  •  
33.
  • Havervall, Sebastian, et al. (author)
  • Robust humoral and cellular immune responses and low risk for reinfection at least 8 months following asymptomatic to mild COVID-19
  • 2022
  • In: Journal of Internal Medicine. - : John Wiley & Sons. - 0954-6820 .- 1365-2796. ; 291:1, pp. 72-80
  • Journal article (peer-reviewed) abstract
    • Background: Emerging data support detectable immune responses for months after severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and vaccination, but it is not yet established to what degree and for how long protection against reinfection lasts. Methods: We investigated SARS-CoV-2-specific humoral and cellular immune responses more than 8 months post-asymptomatic, mild and severe infection in a cohort of 1884 healthcare workers (HCW) and 51 hospitalized COVID-19 patients. Possible protection against SARS-CoV-2 reinfection was analyzed by a weekly 3-month polymerase chain reaction (PCR) screening of 252 HCW that had seroconverted 7 months prior to start of screening and 48 HCW that had remained seronegative at multiple time points. Results: All COVID-19 patients and 96% (355/370) of HCW who were anti-spike IgG positive at inclusion remained anti-spike IgG positive at the 8-month follow-up. Circulating SARS-CoV-2-specific memory T cell responses were detected in 88% (45/51) of COVID-19 patients and in 63% (233/370) of seropositive HCW. The cumulative incidence of PCR-confirmed SARS-CoV-2 infection was 1% (3/252) among anti-spike IgG positive HCW (0.13 cases per 100 weeks at risk) compared to 23% (11/48) among anti-spike IgG negative HCW (2.78 cases per 100 weeks at risk), resulting in a protective effect of 95.2% (95% CI 81.9%-99.1%). Conclusions: The vast majority of anti-spike IgG positive individuals remain anti-spike IgG positive for at least 8 months regardless of initial COVID-19 disease severity. The presence of anti-spike IgG antibodies is associated with a substantially reduced risk of reinfection up to 9 months following asymptomatic to mild COVID-19.
  •  
34.
  • Havervall, Sebastian, et al. (author)
  • SARS-CoV-2 induces a durable and antigen specific humoral immunity after asymptomatic to mild COVID-19 infection
  • 2022
  • In: PLOS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 17:1, pp. e0262169-e0262169
  • Journal article (peer-reviewed) abstract
    • Current SARS-CoV-2 serological assays generate discrepant results, and the longitudinal characteristics of antibodies targeting various antigens after asymptomatic to mild COVID-19 are yet to be established. This longitudinal cohort study including 1965 healthcare workers, of which 381 participants exhibited antibodies against the SARS-CoV-2 spike antigen at study inclusion, reveals that these antibodies remain detectable in most participants, 96%, at least four months post infection, despite having had no or mild symptoms. Virus neutralization capacity was confirmed by microneutralization assay in 91% of study participants at least four months post infection. Contrary to antibodies targeting the spike protein, antibodies against the nucleocapsid protein were only detected in 80% of previously anti-nucleocapsid IgG positive healthcare workers. Both anti-spike and anti-nucleocapsid IgG levels were significantly higher in previously hospitalized COVID-19 patients four months post infection than in healthcare workers four months post infection (p = 2×10⁻²³ and 2×10⁻¹³, respectively). Although the magnitude of humoral response was associated with disease severity, our findings support a durable and functional humoral response after SARS-CoV-2 infection even after no or mild symptoms. We further demonstrate differences in antibody kinetics depending on the antigen, arguing against the use of the nucleocapsid protein as target antigen in population-based SARS-CoV-2 serological surveys.
  •  
35.
  •  
36.
  • Hossain, Mosharraf, 1984-, et al. (author)
  • Integrated Modeling and Application of Standardized Data Schema
  • 2012
  • In: Proceedings of the 5th Swedish Production Symposium (SPS 12). - Linköping : The Swedish Production Academy. - 9789175197524 ; , pp. 473-478
  • Conference paper (peer-reviewed) abstract
    • Application of discrete event simulation has been widely accepted in the manufacturing industry for shop floor capacity and flow analysis. Often the models are made to analyse a limited set of aspects of the system behaviour and abstract only a few factors for the sake of simplicity, at the expense of model accuracy. One of the prohibitive reasons for a detailed and multi-objective model is the collection and management of the input and output simulation data. In this work an integrated and detailed simulation model of a machining line has been built for the purpose of multi-objective analysis and optimization, with application of the standard simulation data schema for data management. The model incorporates detailed capacity and flow related factors as well as sustainability factors (energy usage in machine tools). The abstraction level of the model is sufficiently low to capture the effect of operators' movements on the shop floor and setup procedures. To support modeling of such a detailed model and enhance model reusability, the Core Manufacturing Simulation Data standard XML schema was used. This neutral data format will also help to build models with less effort, reuse the information, and communicate with different application tools. A case study is carried out to illustrate the methods and run a multi-objective optimization. The results of the optimization work are reported in another paper.
  •  
37.
  •  
38.
  •  
39.
  • Joffrin, E., et al. (author)
  • Overview of the JET preparation for deuterium-tritium operation with the ITER like-wall
  • 2019
  • In: Nuclear Fusion. - : IOP Publishing. - 1741-4326 .- 0029-5515. ; 59:11
  • Research review (peer-reviewed) abstract
    • For the past several years, the JET scientific programme (Pamela et al 2007 Fusion Eng. Des. 82 590) has been engaged in a multi-campaign effort, including experiments in D, H and T, leading up to 2020 and the first experiments with 50%/50% D-T mixtures since 1997 and the first ever D-T plasmas with the ITER mix of plasma-facing component materials. For this purpose, a concerted physics and technology programme was launched with a view to prepare the D-T campaign (DTE2). This paper addresses the key elements developed by the JET programme directly contributing to the D-T preparation. This intense preparation includes the review of the physics basis for the D-T operational scenarios, including the fusion power predictions through first principle and integrated modelling, and the impact of isotopes in the operation and physics of D-T plasmas (thermal and particle transport, high confinement mode (H-mode) access, Be and W erosion, fuel recovery, etc). This effort also requires improving several aspects of plasma operation for DTE2, such as real time control schemes, heat load control, disruption avoidance and a mitigation system (including the installation of a new shattered pellet injector), novel ion cyclotron resonance heating schemes (such as the three-ions scheme), new diagnostics (neutron camera and spectrometer, active Alfven eigenmode antennas, neutral gauges, radiation hard imaging systems...) and the calibration of the JET neutron diagnostics at 14 MeV for accurate fusion power measurement. The active preparation of JET for the 2020 D-T campaign provides an incomparable source of information and a basis for the future D-T operation of ITER, and it is also foreseen that a large number of key physics issues will be addressed in support of burning plasmas.
  •  
40.
  • Kamae, Tuneyoshi, et al. (author)
  • PoGOLite - A high sensitivity balloon-borne soft gamma-ray polarimeter
  • 2008
  • In: Astroparticle physics. - : Elsevier BV. - 0927-6505 .- 1873-2852. ; 30:2, pp. 72-84
  • Journal article (peer-reviewed) abstract
    • We describe a new balloon-borne instrument (PoGOLite) capable of detecting 10% polarisation from 200 mCrab point-like sources between 25 and 80 keV in one 6-h flight. Polarisation measurements in the soft gamma-ray band are expected to provide a powerful probe into high energy emission mechanisms as well as the distribution of magnetic fields, radiation fields and interstellar matter. Synchrotron radiation, inverse Compton scattering and propagation through high magnetic fields are likely to produce high degrees of polarisation in the energy band of the instrument. We demonstrate, through tests at accelerators, with radioactive sources and through computer simulations, that PoGOLite will be able to detect degrees of polarisation as predicted by models for several classes of high energy sources. At present, only exploratory polarisation measurements have been carried out in the soft gamma-ray band. Reduction of the large background produced by cosmic-ray particles while securing a large effective area has been the greatest challenge. PoGOLite uses Compton scattering and photo-absorption in an array of 217 well-type phoswich detector cells made of plastic and BGO scintillators surrounded by a BGO anticoincidence shield and a thick polyethylene neutron shield. The narrow field of view (FWHM = 1.25 msr, 2.0 deg x 2.0 deg) obtained with detector cells and the use of thick background shields warrant a large effective area for polarisation measurements (~228 cm² at E = 40 keV) without sacrificing the signal-to-noise ratio. Simulation studies for an atmospheric overburden of 3-4 g/cm² indicate that neutrons and gamma-rays entering the PDC assembly through the shields are dominant backgrounds. Off-line event selection based on recorded phototube waveforms and Compton kinematics reduces the background to that expected for a ~100 mCrab source between 25 and 50 keV. A 6-h observation of the Crab pulsar will differentiate between the Polar Cap/Slot Gap, Outer Gap, and Caustic models with greater than 5 sigma significance; and also cleanly identify the Compton reflection component in the Cygnus X-1 hard state. Long-duration flights will measure the dependence of the polarisation across the cyclotron absorption line in Hercules X-1. A scaled-down instrument will be flown as a pathfinder mission from the north of Sweden in 2010. The first science flight is planned to take place shortly thereafter.
  •  
41.
  • Karpestam, Peter, et al. (author)
  • Economic perspectives on migration
  • 2019. - 2
  • In: Routledge International Handbook of Migration Studies. - Second Edition. | New York : Routledge, 2019. | Series: Routledge International Handbooks : Routledge. - 9781138208827 - 9781315458298 ; , pp. 3-18
  • Book chapter (peer-reviewed) abstract
    • The economic literature on migration has a strong focus on labor migration. It typically distinguishes between migration within countries and between countries and it focuses on the determinants of migration rather than their consequences. Motives and consequences of migration are difficult to separate. We explore the policy implications and empirical support of six common theories. We distinguish between theories of (1) the initiating causes of migration and (2) the self-perpetuating causes of migration, i.e. how current migration flows can cause future migration flows. Our discussion reveals the complexity of motives to migrate, which highlights the necessity of viewing different theories as complementary rather than contradictory. For instance, the new economics of labor migration complements neoclassical theories by emphasizing that households rather than individuals often make migration decisions and by dropping the assumption that individuals are risk-neutral. Further, macroeconomic theories complement microeconomic theories by highlighting that basic structural characteristics of the economy, such as segmented labor markets and scarcity of land, can induce migration flows. Finally, we show that when evaluating different theories empirically, there is support for different theories at different time horizons (the short run, medium run and long run), a supposition commonly ignored in the empirical literature.
  •  
42.
  •  
43.
  • Mansouri, Kamel, et al. (author)
  • CERAPP : Collaborative Estrogen Receptor Activity Prediction Project
  • 2016
  • In: Journal of Environmental Health Perspectives. - : Environmental Health Perspectives. - 0091-6765 .- 1552-9924. ; 124:7, pp. 1023-1033
  • Journal article (peer-reviewed) abstract
    • BACKGROUND: Humans are exposed to thousands of man-made chemicals in the environment. Some chemicals mimic natural endocrine hormones and, thus, have the potential to be endocrine disruptors. Most of these chemicals have never been tested for their ability to interact with the estrogen receptor (ER). Risk assessors need tools to prioritize chemicals for evaluation in costly in vivo tests, for instance, within the U.S. EPA Endocrine Disruptor Screening Program. OBJECTIVES: We describe a large-scale modeling project called CERAPP (Collaborative Estrogen Receptor Activity Prediction Project) and demonstrate the efficacy of using predictive computational models trained on high-throughput screening data to evaluate thousands of chemicals for ER-related activity and prioritize them for further testing. METHODS: CERAPP combined multiple models developed in collaboration with 17 groups in the United States and Europe to predict ER activity of a common set of 32,464 chemical structures. Quantitative structure-activity relationship models and docking approaches were employed, mostly using a common training set of 1,677 chemical structures provided by the U.S. EPA, to build a total of 40 categorical and 8 continuous models for binding, agonist, and antagonist ER activity. All predictions were evaluated on a set of 7,522 chemicals curated from the literature. To overcome the limitations of single models, a consensus was built by weighting models on scores based on their evaluated accuracies. RESULTS: Individual model scores ranged from 0.69 to 0.85, showing high prediction reliabilities. Out of the 32,464 chemicals, the consensus model predicted 4,001 chemicals (12.3%) as high priority actives and 6,742 potential actives (20.8%) to be considered for further testing. CONCLUSION: This project demonstrated the possibility to screen large libraries of chemicals using a consensus of different in silico approaches. This concept will be applied in future projects related to other end points.
  •  
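Record 43 builds its consensus by weighting individual models with scores based on their evaluated accuracies. A simplified sketch of an accuracy-weighted vote over binary activity calls is shown below; the model names, weights and decision threshold are illustrative and do not reproduce CERAPP's exact scoring scheme.

```python
def weighted_consensus(predictions, accuracies, threshold=0.5):
    """Accuracy-weighted vote over binary activity calls from several models.

    `predictions` maps model name -> {chemical: 0 or 1}; `accuracies`
    maps model name -> evaluated accuracy used as that model's weight.
    """
    chemicals = set().union(*(p.keys() for p in predictions.values()))
    consensus = {}
    for chem in chemicals:
        num = den = 0.0
        for model, preds in predictions.items():
            if chem in preds:                       # models may not cover every chemical
                num += accuracies[model] * preds[chem]
                den += accuracies[model]
        consensus[chem] = (num / den) >= threshold if den else None
    return consensus

# Toy inputs: three hypothetical models voting on two chemicals.
preds = {
    "qsar_a": {"bisphenol_a": 1, "caffeine": 0},
    "qsar_b": {"bisphenol_a": 1, "caffeine": 1},
    "docking_c": {"bisphenol_a": 0, "caffeine": 0},
}
acc = {"qsar_a": 0.85, "qsar_b": 0.69, "docking_c": 0.75}
print(weighted_consensus(preds, acc))
```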
44.
  •  
45.
  • Murari, A., et al. (author)
  • A control oriented strategy of disruption prediction to avoid the configuration collapse of tokamak reactors
  • 2024
  • In: Nature Communications. - 2041-1723 .- 2041-1723. ; 15:1
  • Journal article (peer-reviewed) abstract
    • The objective of thermonuclear fusion consists of producing electricity from the coalescence of light nuclei in high temperature plasmas. The most promising route to fusion envisages the confinement of such plasmas with magnetic fields, whose most studied configuration is the tokamak. Disruptions are catastrophic collapses affecting all tokamak devices and one of the main potential showstoppers on the route to a commercial reactor. In this work we report how, deploying innovative analysis methods on thousands of JET experiments covering the isotopic compositions from hydrogen to full tritium and including the major D-T campaign, the nature of the various forms of collapse is investigated in all phases of the discharges. An original approach to proximity detection has been developed, which allows determining both the probability of and the time interval remaining before an incoming disruption, with adaptive, from scratch, real time compatible techniques. The results indicate that physics based prediction and control tools can be developed, to deploy realistic strategies of disruption avoidance and prevention, meeting the requirements of the next generation of devices.
  •  
46.
  •  
47.
  • Ng, Amos H. C., 1970-, et al. (author)
  • Aircraft Assembly Ramp-Up Planning Using a Hybrid Simulation-Optimization Approach
  • 2020
  • In: Proceedings - Winter Simulation Conference. - : IEEE. - 9781728194998 - 9781728195001 ; , pp. 3045-3056
  • Conference paper (peer-reviewed) abstract
    • Assembly processes have the most influencing and long-term impact on the production volume and cost in the aerospace industry. One of the most crucial factors in aircraft assembly lines design during the conceptual design phase is ramp-up planning that synchronizes the production rates at the globally dispersed facilities. Inspired by a pilot study performed with an aerospace company, this paper introduces a hybrid simulation-optimization approach for addressing an assembly production chain ramp-up problem that takes into account: (1) the interdependencies of the ramp-up profiles between final assembly lines and its upstream lines; (2) workforce planning with various learning curves; (3) inter-plant buffer and lead-time optimization, in the problem formulation. The approach supports the optimization of the ramp-up profile that minimizes the times the aircraft assemblies stay in the buffers and simultaneously attains zero backlog. It also generates the required simulation-optimization data for supporting the decision-making activities in the industrialization projects. 
  •  
48.
  • Ng, Amos, et al. (author)
  • OPTIMISE : An Internet-Based Platform for Metamodel-Assisted Simulation Optimization
  • 2008
  • In: Advances in Communication Systems and Electrical Engineering. - Boston, MA : Springer Science+Business Media B.V.. - 9780387749372 - 9780387749389 ; , pp. 281-296
  • Book chapter (peer-reviewed) abstract
    • Computer simulation has been described as the most effective tool for designing and analyzing systems in general and discrete-event systems (e.g., production or logistic systems) in particular (De Vin et al. 2004). Historically, the main disadvantage of simulation is that it was not a real optimization tool. Recently, research efforts have been focused on integrating metaheuristic algorithms, such as genetic algorithms (GA), with simulation software so that “optimal” or close to optimal solutions can be found automatically. An optimal solution here means the setting of a set of controllable design variables (also known as decision variables) that can minimize or maximize an objective function. This approach is called simulation optimization or simulation-based optimization (SBO), which is perhaps the most important new simulation technology in the last few years (Law and McComas 2002). In contrast to other optimization problems, it is assumed that the objective function in an SBO problem cannot be evaluated analytically but has to be estimated through deterministic/stochastic simulation.
  •  
49.
  • Ng, Amos, et al. (author)
  • Web Services for Metamodel-Assisted Parallel Simulation Optimization
  • 2007
  • In: IMECS 2007. - : International Association of Engineers. - 9789889867140 ; , pp. 879-885
  • Conference paper (peer-reviewed) abstract
    • This paper presents the OPTIMISE platform currently developed in the research project OPTIMIST. The aim of OPTIMISE is to facilitate research on metamodel-assisted simulation optimisation using soft computing techniques by providing a platform for the development and evaluation of new algorithms.
  •  
50.
  •  
Type of publication
journal article (30)
conference paper (23)
other publication (4)
research review (4)
doctoral thesis (2)
book chapter (2)
Type of content
peer-reviewed (52)
other academic/artistic (13)
Author/editor
Jones, G. (8)
Garcia, J. (8)
Duran, I (8)
Li, Y. (7)
Nowak, S. (7)
Bowden, M. (7)
Mayer, M. (7)
Belli, F. (7)
Krieger, K. (7)
Airila, M (7)
Albanese, R (7)
Alper, B (7)
Ambrosino, G (7)
Angioni, C (7)
Ariola, M (7)
Ash, A (7)
Avotina, L (7)
Baciero, A (7)
Balboa, I (7)
Balden, M (7)
Balshaw, N (7)
Barnsley, R (7)
Baruzzo, M (7)
Batistoni, P (7)
Baylor, L (7)
Bekris, N (7)
Beldishevski, M (7)
Bernardo, J (7)
Bernert, M (7)
Bilkova, P (7)
Blanchard, P (7)
Bobkov, V (7)
Boboc, A (7)
Bolshakova, I (7)
Bolzonella, T (7)
Bonnin, X (7)
Boulbe, C (7)
Bourdelle, C (7)
Braic, V (7)
Brett, A (7)
Brezinsek, S (7)
Brix, M (7)
Buratti, P (7)
Cannas, B (7)
Cardinali, A (7)
Carman, P (7)
Carralero, D (7)
Carraro, L (7)
Carvalho, I (7)
Carvalho, P (7)
University
Högskolan i Skövde (27)
Karolinska Institutet (23)
Kungliga Tekniska Högskolan (15)
Uppsala universitet (12)
Chalmers tekniska högskola (7)
Umeå universitet (4)
Lunds universitet (4)
Stockholms universitet (2)
Göteborgs universitet (1)
Malmö universitet (1)
Linnéuniversitetet (1)
Language
English (67)
Research subject (UKÄ/SCB)
Natural sciences (26)
Engineering and technology (13)
Medical and health sciences (8)
Social sciences (1)
