SwePub
Search the SwePub database


Hit list for search "WFRF:(Vanhatalo Erik)"

Search: WFRF:(Vanhatalo Erik)

  • Results 1-41 of 41
1.
  • Bergquist, Bjarne, et al. (author)
  • A Bayesian analysis of unreplicated two-level factorials using effects sparsity, hierarchy, and heredity
  • 2011
  • In: Quality Engineering. - : Informa UK Limited. - 0898-2112 .- 1532-4222. ; 23:2, pp. 152-166
  • Journal article (peer-reviewed) abstract
    • This article proposes a Bayesian procedure to calculate posterior probabilities of active effects for unreplicated two-level factorials. The results from a literature survey are used to specify individual prior probabilities for the activity of effects and the posterior probabilities are then calculated in a three-step procedure where the principles of effects sparsity, hierarchy, and heredity are successively considered. We illustrate our approach by reanalyzing experiments found in the literature.
  •  
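Below is a minimal, illustrative sketch of the kind of Bayesian posterior calculation summarized in entry 1 (in the spirit of the Box-Meyer approach that entry 35 cites). The prior probability `prior_active`, the variance-inflation factor `k`, and the example effect estimates are assumptions for illustration, not values from the article.

```python
import numpy as np
from scipy import stats

def posterior_active(effects, sigma, prior_active=0.2, k=10.0):
    """P(effect is active | estimate), assuming inactive effects are
    N(0, sigma^2) and active effects are N(0, (k*sigma)^2)."""
    effects = np.asarray(effects, dtype=float)
    like_inactive = stats.norm.pdf(effects, scale=sigma)    # inactive model
    like_active = stats.norm.pdf(effects, scale=k * sigma)  # active model
    num = prior_active * like_active
    return num / (num + (1.0 - prior_active) * like_inactive)

# Hypothetical effect estimates from a 2^3 factorial; one clearly large effect.
effects = [0.2, -0.4, 8.5, 0.1, -0.3, 0.6, 0.2]
print(posterior_active(effects, sigma=1.0).round(3))
```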
2.
  • Bergquist, Bjarne, et al. (author)
  • Alive and kicking – but will Quality Management be around tomorrow? : A Swedish academia perspective
  • 2012
  • Conference paper (peer-reviewed) abstract
    • Purpose: There is no recognized conception of what quality management (QM) comprises, nor a clear roadmap of where QM is heading. The purpose of this article is to investigate how QM is perceived today by scholars at three Swedish universities, and into what QM is expected to develop in twenty years. Methodology: Data have been collected through three structured workshops using affinity diagrams with scholars teaching and performing research in the QM field, affiliated with three different Swedish universities. Findings: The results indicate that current QM is perceived similarly across the universities, although the taxonomy differs slightly. QM is described as a fairly wide discipline consisting of a set of core principles that in turn guide the methods and tools many currently perceive as the core of the discipline. The outlook for the future differs more; three possible development directions for QM are seen: [1] searching for a "discipline X" where QM can contribute while keeping its toolbox, [2] focusing on a core based on the traditional quality technology toolbox with methods and tools, and [3] a risk that QM, as it is today, may cease to exist and be diffused into other disciplines. Originality/value: This article contributes a viewpoint on QM today and its future development from the academicians' perspective.
  •  
3.
  • Bergquist, Bjarne, et al. (author)
  • Alive and kicking – but will Quality Management be around tomorrow? A Swedish academia perspective
  • 2012
  • In: Quality Innovation Prosperity. - : Technical University of Kosice, Faculty of Materials, Metallurgy and Recycling. - 1335-1745 .- 1338-984X. ; 16:2, pp. 1-18
  • Journal article (peer-reviewed) abstract
    • The purpose of this article is to describe how Quality Management (QM) is perceived today by scholars at three Swedish universities, and into what QM is expected to develop in twenty years. Data were collected through structured workshops using affinity diagrams with scholars teaching and performing research in the QM field. The results show that QM is currently perceived as consisting of a set of core principles, methods and tools. Three possible future development directions for QM are seen: [1] searching for a "discipline X" where QM can contribute while keeping its toolbox, [2] focusing on a core based on the traditional quality technology toolbox with methods and tools, and [3] a risk that QM, as it is today, may cease to exist and be diffused into other disciplines.
  •  
4.
  • Bergquist, Bjarne, et al. (author)
  • Cleaning of Railway Track Measurement Data for Better Maintenance Decisions
  • 2019
  • In: Proceedings of the 5th International Workshop and Congress on eMaintenance. - : Luleå University of Technology. ; pp. 9-15
  • Conference paper (peer-reviewed) abstract
    • Data of sufficient quality, quantity and validity constitute a sometimes overlooked basis for eMaintenance. Missing data, heterogeneous data types, calibration problems, or non-standard distributions are common issues of operation and maintenance data. Railway track geometry data used for maintenance planning exhibit all the above issues. They also have unique features stemming from their collection by measurement cars running along the railway network. As the track is a linear asset, measured geometry data need to be precisely located to be useful. However, since the sensors on the measurement car are moving along the track, the observations' geographical sampling positions come with uncertainty. Another issue is that different seasons and other time restrictions (e.g. related to the timetable) prohibit regular sampling. Hence, prognostics related to remaining useful life (RUL) are challenging since most forecasting methods require a fixed sampling frequency. This paper discusses methods for data cleaning, data condensation and data extraction from large datasets collected by measurement cars. We discuss missing data replacement, dealing with autocorrelation or cross-correlation, and the consequences of not fulfilling methodological pre-conditions, such as estimating probabilities of failures using data that do not follow the assumed distributions or data that are dependent. We also discuss outlier detection and dealing with data coming from multiple distributions or with unknown calibrations, among other issues seen in railway track geometry data. Finally, we discuss the consequences of not addressing or mishandling quality issues of such data.
  •  
5.
  • Bergquist, Bjarne, et al. (author)
  • In-situ measurement in the iron ore pellet distribution chain using active RFID technology
  • 2020
  • In: Powder Technology. - : Elsevier. - 0032-5910 .- 1873-328X. ; 361, pp. 791-802
  • Journal article (peer-reviewed) abstract
    • The active radio frequency identification (RFID) technique is used for in-situ measurement of acceleration and temperature in the distribution chain of iron ore pellets. The results of this paper are based on two experiments, in which active RFID transponders were released into train wagons or product bins. RFID exciters and readers were installed downstream in a harbour storage silo to retrieve data from the active transponders. Acceleration peaks and temperatures were recorded. The results imply that in-situ data can aid the understanding of induced stresses along the distribution chain to, for example, reduce pellet breakage and dusting. In-situ data can also increase understanding of product mixing behaviour and product residence times in silos. Better knowledge of stresses, product mixing and residence times are beneficial to process and product quality improvement, to better understand the transportation process, and to reduce environmental impacts due to dusting.
  •  
6.
  • Bergquist, Bjarne, et al. (author)
  • Power Analysis of Methods for Analysing Unreplicated Factorial Experiments
  • 2013
  • Conference paper (peer-reviewed) abstract
    • Several methods for the formal analysis of unreplicated factorial-type experiments have been proposed in the literature. Five formal methods from the literature, all based on the effect sparsity principle, have been studied in a simulation study. The simulation included 2³- and 2⁴-type factorials with one, two, or four active effects. The simulated signal-to-noise ratios for the effects were all between two and four, and the Type I and Type II errors of the analysis methods were analysed. Preliminary results show that Bayesian models are more powerful in these contexts, especially if informative priors based on the effect heredity and effect hierarchy principles are used.
  •  
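The following is a minimal sketch of the kind of Monte Carlo power study described in entry 6, using Lenth's margin-of-error method as one representative formal analysis method; the design size, number of active effects, and signal-to-noise setting are illustrative assumptions, not the paper's exact simulation protocol.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def lenth_active(contrasts, alpha=0.05):
    """Flag active contrasts using Lenth's pseudo standard error (PSE)."""
    c = np.abs(contrasts)
    s0 = 1.5 * np.median(c)
    pse = 1.5 * np.median(c[c < 2.5 * s0])
    t_crit = stats.t.ppf(1 - alpha / 2, df=len(contrasts) / 3)
    return c > t_crit * pse

def power_sim(n_effects=15, n_active=2, snr=3.0, reps=2000):
    """Empirical power: fraction of truly active effects declared active."""
    hits = 0
    for _ in range(reps):
        effects = rng.normal(0.0, 1.0, n_effects)  # inactive effects: noise only
        effects[:n_active] += snr                  # add signal to active effects
        hits += lenth_active(effects)[:n_active].sum()
    return hits / (reps * n_active)

print(f"Power for 2 active effects at SNR 3: {power_sim():.2f}")
```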
7.
  • Capaci, Francesca, et al. (author)
  • A two-step procedure for fault detection in the Tennessee Eastman Process simulator
  • 2016
  • Conference paper (peer-reviewed) abstract
    • Technologically advanced and complex production processes, combined with the high availability and sampling frequencies of data in large-scale industrial processes, require the concurrent development of appropriate statistical control tools and monitoring techniques. Multivariate control charts based on latent variables are therefore essential tools to detect and isolate process faults. Several Statistical Process Control (SPC) charts have been developed for multivariate and megavariate data, such as the Hotelling T2, MCUSUM and MEWMA control charts, as well as charts based on principal component analysis (PCA) and dynamic PCA (DPCA). The ability of SPC procedures based on PCA (Kourti, MacGregor 1995) or DPCA (Ku et al. 1995) to detect and isolate process disturbances for a large number of highly correlated (and, in the case of DPCA, time-dependent) variables has been demonstrated in the literature. However, we argue that the fault isolation capability and the fault detection rate can be improved further for processes operating under feedback control loops (in closed loop). The purpose of this presentation is to illustrate a two-step method where [1] the variables are pre-classified prior to the analysis and [2] a monitoring scheme based on latent variables is implemented. Step 1 involves a structured qualitative classification of the variables to guide the choice of which variables to monitor in Step 2. We argue that the proposed method will be useful for many practitioners of SPC based on latent-variable techniques in processes operating in closed loop. It will allow clearer fault isolation and detection and easier implementation of corrective actions. A case study based on the data available from the Tennessee Eastman Process simulator under feedback control loops (Matlab) will be presented. The results from the proposed method are compared with currently available methods through simulations in the R statistics software.
  •  
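As an illustration of the monitoring step (Step 2) described in entry 7, here is a minimal sketch of PCA-based monitoring with a Hotelling T² statistic on the retained scores. The synthetic data, the number of retained components, and the chi-square control limit are assumptions for illustration, and the variable pre-classification of Step 1 is taken as already done.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Reference (in-control) data: 500 observations of 10 correlated variables
# driven by 3 latent factors.
latent = rng.normal(size=(500, 3))
X = latent @ rng.normal(size=(3, 10)) + 0.3 * rng.normal(size=(500, 10))

mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 3                                  # retained principal components
score_var = s[:k] ** 2 / (len(Z) - 1)  # variance of each retained score

def t2(x):
    """Hotelling T2 of one observation in the k-dimensional PCA subspace."""
    t = ((x - mu) / sd) @ Vt[:k].T
    return float(np.sum(t ** 2 / score_var))

ucl = stats.chi2.ppf(0.99, df=k)       # approximate 99% control limit
x_new = X[0].copy()
x_new[0] += 4 * sd[0]                  # simulated fault on variable 0
print(f"T2 in-control: {t2(X[0]):.1f}, faulty: {t2(x_new):.1f}, UCL: {ucl:.1f}")
```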
8.
  • Capaci, Francesca (author)
  • Adapting Experimental and Monitoring Methods for Continuous Processes under Feedback Control : Challenges, Examples, and Tools
  • 2019
  • Doctoral thesis (other academic/artistic) abstract
    • Continuous production covers a significant part of today's industrial manufacturing. Consumer goods purchased on a frequent basis, such as food, drugs, and cosmetics, and capital goods such as iron, chemicals, oil, and ore come through continuous processes. Statistical process control (SPC) and design of experiments (DoE) play important roles as quality control and product and process improvement methods. SPC reduces product and process variation by eliminating assignable causes, while DoE shows how products and processes may be improved through systematic experimentation and analysis. Special issues emerge when applying these methods to continuous process settings, such as the need to simultaneously analyze massive time series of autocorrelated and cross-correlated data. Another important characteristic of most continuous processes is that they operate under engineering process control (EPC), as in the case of feedback controllers. Feedback controllers transform processes into closed-loop systems and thereby increase process and analysis complexity, so SPC and DoE methods need to be adapted accordingly. For example, the quality characteristics or process variables to be monitored in a control chart, or the experimental factors in an experiment, need to be chosen considering the presence of feedback controllers. The main objective of this thesis is to suggest adapted strategies for applying experimental and monitoring methods (namely, DoE and SPC) to continuous processes under feedback control. Specifically, this research aims to [1] identify, explore, and describe the potential challenges when applying SPC and DoE to continuous processes; [2] propose and illustrate new or adapted SPC and DoE methods to address some of the issues raised by the presence of feedback controllers; and [3] suggest potential simulation tools that may be instrumental in SPC and DoE methods development. The results are summarized in five appended papers. Through a literature review, Paper A outlines the SPC and DoE implementation challenges for managers, researchers, and practitioners. For example, the problems due to process transitions, the multivariate nature of data, serial correlation, and the presence of EPC are discussed. Paper B describes the issues and potential strategies in designing and analyzing experiments on processes operating under closed-loop control. Two simulated examples in the Tennessee Eastman (TE) process simulator show the benefits of using DoE methods to improve these industrial processes. Paper C provides guidelines on how to use the revised TE process simulator under a decentralized control strategy as a testbed for SPC and DoE methods development in continuous processes. Papers D and E discuss the concurrent use of SPC in processes under feedback control. Paper D further illustrates how step and ramp disturbances manifest themselves in single-input single-output processes controlled by variations of the proportional-integral-derivative (PID) control scheme and discusses the implications for process monitoring. Paper E describes a two-step monitoring procedure for multivariate processes and explains the process and controller performance when out-of-control process conditions occur.
  •  
9.
  • Capaci, Francesca (author)
  • Contributions to the Use of Statistical Methods for Improving Continuous Production
  • 2017
  • Licentiate thesis (other academic/artistic) abstract
    • Complexity of production processes, high computing capabilities, and massive datasets characterize today's manufacturing environments, such as those of continuous and batch production industries. Continuous production has spread gradually across different industries, covering a significant part of today's production. Common consumer goods such as food, drugs, and cosmetics, and industrial goods such as iron, chemicals, oil, and ore come from continuous processes. To stay competitive in today's market requires constant process improvements in terms of both effectiveness and efficiency. Statistical process control (SPC) and design of experiments (DoE) techniques can play an important role in this improvement strategy. SPC attempts to reduce process variation by eliminating assignable causes, while DoE is used to improve products and processes by systematic experimentation and analysis. However, special issues emerge when applying these methods in continuous process settings. Highly automated and computerized processes provide an exorbitant amount of serially dependent and cross-correlated data, which may be difficult to analyze simultaneously. Time series data, transition times, and closed-loop operation are examples of additional challenges that the analyst faces. The overall objective of this thesis is to contribute to the use of statistical methods, namely SPC and DoE methods, to improve continuous production. Specifically, this research serves two aims: [1] to explore, identify, and outline potential challenges when applying SPC and DoE in continuous processes, and [2] to propose simulation tools and new or adapted methods to overcome the identified challenges. The results are summarized in three appended papers. Through a literature review, Paper A outlines SPC and DoE implementation challenges for managers, researchers, and practitioners. For example, problems due to process transitions, the multivariate nature of data, serial correlation, and the presence of engineering process control (EPC) are discussed. Paper B further explores one of the DoE challenges identified in Paper A. Specifically, Paper B describes issues and potential strategies when designing and analyzing experiments in processes operating under closed-loop control. Two simulated examples in the Tennessee Eastman (TE) process simulator show the benefits of using DoE techniques to improve and optimize such industrial processes. Finally, Paper C provides guidelines, using flow charts, on how to use the continuous process simulator, "The revised TE process simulator," run with a decentralized control strategy as a test bed for developing SPC and DoE methods in continuous processes. Simulated SPC and DoE examples are also discussed.
  •  
10.
  • Capaci, Francesca, et al. (author)
  • Exploring the Use of Design of Experiments in Industrial Processes Operating Under Closed-Loop Control
  • 2017
  • In: Quality and Reliability Engineering International. - : John Wiley & Sons. - 0748-8017 .- 1099-1638. ; 33:7, pp. 1601-1614
  • Journal article (peer-reviewed) abstract
    • Industrial manufacturing processes often operate under closed-loop control, where automation aims to keep important process variables at their set-points. In process industries such as pulp, paper, chemical and steel plants, it is often hard to find production processes operating in open loop. Instead, closed-loop control systems will actively attempt to minimize the impact of process disturbances. However, we argue that an implicit assumption in most experimental investigations is that the studied system is open loop, allowing the experimental factors to freely affect the important system responses. This scenario is typically not found in process industries. The purpose of this article is therefore to explore issues of experimental design and analysis in processes operating under closed-loop control and to illustrate how Design of Experiments can help in improving and optimizing such processes. The Tennessee Eastman challenge process simulator is used as a test-bed to highlight two experimental scenarios. The first scenario explores the impact of experimental factors that may be considered as disturbances in the closed-loop system. The second scenario exemplifies a screening design using the set-points of controllers as experimental factors. We provide examples of how to analyze the two scenarios.
  •  
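To make the second scenario in entry 10 concrete, here is a minimal sketch of a two-level screening design whose factors stand in for controller set-points; the 2³ design, the factor names A, B, C, and the synthetic response model are hypothetical illustrations, not the article's simulated Tennessee Eastman experiments.

```python
import numpy as np
from itertools import product

# 2^3 full factorial in coded units; A, B, C stand for three set-points.
design = np.array(list(product([-1.0, 1.0], repeat=3)))

rng = np.random.default_rng(3)
# Synthetic response: set-point A and the A:C interaction are active.
y = 10 + 2.0 * design[:, 0] + 1.5 * design[:, 0] * design[:, 2] \
    + rng.normal(0, 0.5, len(design))

# Model matrix: intercept, main effects, and two-factor interactions.
cols = {"A": design[:, 0], "B": design[:, 1], "C": design[:, 2],
        "AB": design[:, 0] * design[:, 1],
        "AC": design[:, 0] * design[:, 2],
        "BC": design[:, 1] * design[:, 2]}
Xm = np.column_stack([np.ones(len(design))] + list(cols.values()))
coef, *_ = np.linalg.lstsq(Xm, y, rcond=None)
for name, b in zip(["Intercept"] + list(cols), coef):
    print(f"{name:9s} {b:+.2f}")
```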
11.
  • Capaci, Francesca, et al. (author)
  • Managerial implications for improving continuous production processes
  • 2017
  • Conference paper (peer-reviewed) abstract
    • Data analytics remains essential for process improvement and optimization. Statistical process control (SPC) and design of experiments (DoE) are among the most powerful process and product improvement methods available. However, continuous process environments challenge the application of these methods. In this article, we highlight SPC and DoE implementation challenges described in the literature for managers, researchers and practitioners interested in continuous production process improvement. The results may help managers support the implementation of these methods and make researchers and practitioners aware of methodological challenges in continuous process environments.
  •  
12.
  • Capaci, Francesca, et al. (author)
  • On Monitoring Industrial Processes under Feedback Control
  • 2020
  • In: Quality and Reliability Engineering International. - : John Wiley & Sons. - 0748-8017 .- 1099-1638. ; 36:8, pp. 2720-2737
  • Journal article (peer-reviewed) abstract
    • The concurrent use of statistical process control and engineering process control involves monitoring manipulated and controlled variables. One multivariate control chart may handle the statistical monitoring of all variables, but observing the manipulated and controlled variables in separate control charts may improve understanding of how disturbances and the controller performance affect the process. In this article, we illustrate how step and ramp disturbances manifest themselves in a single-input, single-output system by studying their resulting signatures in the controlled and manipulated variables. The system is controlled by variations of the widely used proportional-integral-derivative (PID) control scheme. Implications for applying control charts in these scenarios are discussed.
  •  
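A minimal sketch of the effect entry 12 describes: a first-order process under discrete PI control is hit by a step disturbance, after which the controlled variable returns to its set-point while the manipulated variable carries the sustained signature. The process model and tuning constants are illustrative assumptions, not the article's system.

```python
import numpy as np

a, b = 0.9, 0.5        # first-order process: y[t] = a*y[t-1] + b*u[t-1] + d[t]
kp, ki = 0.6, 0.15     # PI tuning (illustrative; closed loop is stable here)
n, step_at = 200, 100

y, u = np.zeros(n), np.zeros(n)
integral = 0.0
for t in range(1, n):
    d = 0.2 if t >= step_at else 0.0   # step disturbance entering the process
    y[t] = a * y[t - 1] + b * u[t - 1] + d
    e = 0.0 - y[t]                     # deviation from set-point (zero)
    integral += e
    u[t] = kp * e + ki * integral      # PI controller output

# Controlled variable returns to set-point; manipulated variable shows the
# sustained shift (about -d/b = -0.4) that compensates the disturbance.
print("controlled variable, last 5 :", y[-5:].round(3))
print("manipulated variable, last 5:", u[-5:].round(3))
```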
13.
  • Capaci, Francesca, et al. (author)
  • Simulating and Analyzing Experiments in the Tennessee Eastman Process Simulator
  • 2015
  • In: ENBIS-15.
  • Conference paper (peer-reviewed) abstract
    • In many of today's continuous processes, data collection is performed automatically, yielding exorbitant amounts of data on various quality characteristics and inputs to the system. Moreover, such data are usually collected at high frequency, introducing significant serial dependence in time. This violates the independent-data assumption of many industrial statistics methods used in process improvement studies. These studies often involve controlled experiments to unearth the causal relationships to be used for robustness and optimization purposes. However, real production processes are not suitable for studying new experimental methodologies, partly because unknown disturbances/experimental settings may lead to erroneous conclusions. Moreover, large-scale experimentation in production processes is frowned upon due to consequent disturbances and production delays. Hence, realistic simulation of such processes offers an excellent opportunity for experimentation and methodological development. One commonly used process simulator is the Tennessee Eastman (TE) challenge chemical process simulator (Downs & Vogel, 1993)[1]. The process produces two products from four reactants, containing 41 measured variables and 12 manipulated variables. In addition to the process description, the problem statement defines process constraints, 20 types of process disturbances, and six operating modes corresponding to different production rates and mass ratios in the product stream. The purpose of this paper is to illustrate the use of the TE process with an appropriate feedback control as a test-bed for the methodological development of new experimental design and analysis techniques. The paper illustrates how two-level experimental designs can be used to identify how the input factors affect the outputs in a chemical process. Simulations using Matlab/Simulink software are used to study the impact of, e.g., process disturbances, closed-loop control and autocorrelated data on different experimental arrangements. The experiments are analysed using a time series analysis approach to identify input-output relationships in a process operating in closed loop with multivariate responses. The dynamics of the process are explored and the necessary run lengths for stable effect estimates are discussed.
  •  
14.
  • Capaci, Francesca, et al. (author)
  • Simulating Experiments in Closed-Loop Control Systems
  • 2016
  • In: ENBIS-16 in Sheffield.
  • Conference paper (peer-reviewed) abstract
    • The Design of Experiments (DoE) literature extensively discusses how to properly plan, conduct and analyze experiments for process and product improvement. However, it is typically assumed that the experiments are run on processes operating in open loop: the changes in experimental factors are directly visible in the process responses and are not hidden by (automatic) feedback control. Under this assumption, DoE methods have been successfully applied in process industries such as the chemical, pharmaceutical and biological industries. However, increasing instrumentation, automation and interconnectedness are changing how processes are run. Processes often involve engineering process control, as in the case of closed-loop systems. The closed-loop environment adds complexity to experimentation and analysis since the experimenter must account for control actions that may aim to keep a response variable at its set-point value. The common approach to experimental design and analysis will likely need adjustments in the presence of closed-loop controls. Careful consideration is, for instance, needed when the experimental factors are chosen. Moreover, the impact of the experimental factors may not be directly visible as changes in the response variables (Hild, Sanders, & Cooper, 2001). Instead, other variables may need to be used as proxies for the intended response variable(s). The purpose of this presentation is to illustrate how experiments in closed-loop systems can be planned and analyzed. A case study based on the Tennessee Eastman Process simulator run with a decentralized feedback control strategy (Matlab) (Lawrence Ricker, 1996) is discussed and presented.
  •  
15.
  • Capaci, Francesca, et al. (author)
  • The Revised Tennessee Eastman Process Simulator as Testbed for SPC and DoE Methods
  • 2019
  • In: Quality Engineering. - : Taylor & Francis. - 0898-2112 .- 1532-4222. ; 31:2, pp. 212-229
  • Journal article (peer-reviewed) abstract
    • Engineering process control and high-dimensional, time-dependent data present great methodological challenges when applying statistical process control (SPC) and design of experiments (DoE) in continuous industrial processes. Process simulators with an ability to mimic these challenges are instrumental in research and education. This article focuses on the revised Tennessee Eastman process simulator providing guidelines for its use as a testbed for SPC and DoE methods. We provide flowcharts that can support new users to get started in the Simulink/Matlab framework, and illustrate how to run stochastic simulations for SPC and DoE applications using the Tennessee Eastman process.
  •  
16.
  • Englund, Stefan, et al. (author)
  • Granular Flow and Segregation Behavior
  • 2016
  • Conference paper (peer-reviewed) abstract
    • 1. Purpose of the presentation. Granular materials such as grain, gravel, powder or pellets can be thought of as an intermediate state of matter: they can sustain shear like a solid up to a point, but they can also flow (Behringer 1995). However, differences in particle sizes, shapes or densities are known to cause segregation when granular materials flow. Surface segregation has often been studied, and the mechanisms of segregation on a surface are described in many articles (Makse 1999)(Gray, Gajjar et al. 2015)(Lumay, Boschini et al. 2013). Descriptions of the segregation behaviour of granular flow below surfaces are less common. Literature related to bulk flow mostly describes a bulk containing a variety of granular sizes (Engblom, Saxén et al. 2012)(Jaehyuk Choi and Arshad Kudrolli and Martin Z. Bazant 2005). Warehouses such as silos or bins constitute major segregation and mixing points in many granular material transport chains. Such warehouses also subject the granular media to flow- or impact-induced stresses. Traceability in these kinds of continuous or semi-continuous granular flow environments faces many challenges. Adding in-situ sensors, so-called PATs, is one way to trace material in a granular flow. It is, however, difficult to predict whether the sensors experience the same physical stresses as the average granules do if the PATs segregate. To contain the required electronics, these sensors with casings may need to be made larger than the bulk particles they are supposed to follow. It is therefore important to understand when larger particles segregate and how to design sensor casings to prevent segregation. However, segregation of larger or differently shaped particles added as single objects to a homogeneously sized particle flow has, to our knowledge, not yet been studied, and that is the purpose of this study. 2. Results. We show the significant factors that affect segregation behaviour and how they modify it. Depending on the shape of the silo and the type of flow during discharge, we also show how the segregation of individual grains of a given shape, size and density depends on the velocity of the granular flow. 3. Research limitations/implications. The time-consuming method of manually retrieving data on each individual particle and the surrounding bulk material limits the volume of data that can be retrieved. Further research will implement Particle Image Velocimetry (PIV) technology and customised software to analyse metadata from experiments much more efficiently. 4. Practical implications. The practical outcome of this research is connected to the ability to trace batches in continuous and semi-continuous supply chains in time and space: the possibility to design a decision model for a specific supply chain for more customized quality control and, as far as we know, completely new possibilities related to root cause analyses of quality issues in the production or supply chain. 5. Value of presentation. Even though the research was made in relation to the local mining industry and the supply chain for iron ore pellets, the greatest value is expected in the pharmaceutical industry or any law- and regulation-controlled industry where efficient traceability of any product on the market is essential. 6. Method. Experiments have been performed using granules of different shapes and densities to study flow and segregation behaviour. The experiments have been performed in a transparent 2D model of a silo, designed to replicate warehouses along an iron ore pellets distribution chain. Bulk material consisting of granules representing iron ore has been discharged together with larger objects of different sizes representing sensors or RFID tags. The shape, size and density of the larger objects are modified while studying mixing, flow behaviour and segregation tendencies using video. Video analyses have been used to measure the flow speed and flow distribution of the bulk and of the larger objects. The video material and individual particles are then statistically analysed to clarify significant factors in segregation behaviour related to the size, form and density of the particles. The results are based on Design Expert, Minitab and customized Matlab software.
  •  
17.
  • Holmbom, Martin, et al. (author)
  • Performance-based logistics – an illusive panacea or a concept for the future?
  • 2014
  • In: Journal of Manufacturing Technology Management. - 1741-038X .- 1758-7786. ; 25:7, pp. 958-979
  • Journal article (peer-reviewed) abstract
    • Purpose: The purpose of this paper is to summarize previously reported benefits, drawbacks and important aspects for the implementation of performance-based logistics (PBL), and to identify knowledge gaps. Design/methodology/approach: This is a literature review based on 101 articles. The reviewed articles are relevant to PBL in particular, but also to performance contracting, product-service systems (PSS) and servitization in general. The research method involved database searches, filtering results and reviewing publications. Findings: PBL is a business concept that aims to reduce the customer's total costs for capital-intensive products and increase the supplier's profit. The design of the contract, performance measurements and payment models are important aspects for successful implementation. However, a reason for concern is the lack of empirical evidence of the profitability of PBL for the customer and the supplier. Originality/value: This literature review of PBL also includes publications from the related research areas: performance contracting, PSS and servitization. Developing PBL can benefit from results in these research areas.
  •  
18.
  •  
19.
  • Kvarnström, Björn, et al. (author)
  • Using RFID to improve traceability in process industry : experiments in a distribution chain for iron ore pellets
  • 2010
  • In: Journal of Manufacturing Technology Management. - : Emerald. - 1741-038X .- 1758-7786. ; 21:1, pp. 139-154
  • Journal article (peer-reviewed) abstract
    • Purpose: The purpose of the article is to explore the application of Radio Frequency Identification (RFID) to improve traceability in a flow of granular products and to illustrate examples of special issues that need to be considered when using the RFID technique in a process industry setting. Design/methodology/approach: The article outlines a case study at a Swedish mining company, including experiments to test the suitability of RFID for tracing iron ore pellets (a granular product) in parts of the distribution chain. Findings: The results show that the RFID technique can be used to improve traceability in granular product flows. A number of special issues concerning the use of RFID in process industries are also highlighted, for example, the problem of controlling the orientation of the transponder in the read area and the risk of product contamination in the supply chain. Research limitations/implications: Even though only a single case has been studied, the results are of general interest for industries that have granular product flows. However, future research in other industries should be performed to validate the results. Practical implications: The application of RFID described in this article makes it possible to increase productivity and product quality by improving traceability in product flows where traceability normally is problematic. Originality/value: Prior research has mainly focused on RFID applications in discontinuous processes. By contrast, this article presents a novel application of the RFID technique in a continuous process together with specific issues connected to the use of RFID.
  •  
20.
  • Larsson Turtola, Simon, et al. (author)
  • Integrating mixture experiments and six sigma methodology to improve fibre‐reinforced polymer composites
  • 2022
  • In: Quality and Reliability Engineering International. - : John Wiley & Sons. - 0748-8017 .- 1099-1638. ; 38:4, pp. 2233-2254
  • Journal article (peer-reviewed) abstract
    • This article illustrates a Six Sigma project aimed at reducing manufacturing-induced visual deviations for fibre-reinforced polymer (FRP) composites. For a European composites manufacturer, such visual deviations lead to scrapping of cylindrical composite bodies and subsequent environmental impact. The composite bodies are manufactured through vacuum infusion, where a resin mixture impregnates a fibreglass preform and cures, transforming from liquid to solid state. We illustrate the define-measure-analyse-improve-control (DMAIC) steps of the Six Sigma project. Specific emphasis is placed on the measure and analyse steps featuring a 36-run computer-generated mixture experiment with six resin mixture components and six responses. Experimental analysis establishes causal relationships between mixture components and correlated resin characteristics, which can be used to control resin characteristics. Two new resin mixtures were developed and tested in the improve step using the understanding developed in previous steps. Manufacturing-induced visual deviations were greatly reduced by adjusting the resin mixture to induce a slower curing process. Further refinement of the mixture was made in the control step. A production scrap rate of 5% due to visual deviations was measured during a monitoring period of 5 months after the resin mixture change. The scrap rate was substantially improved compared to the historical level (60%). The successful experimental investigation integrated in this Six Sigma project is expected to generate increased quality, competitiveness, and substantial savings.
  •  
21.
  • Lundkvist, Peder, et al. (author)
  • Identifying Process Dynamics through a Two-Level Factorial Experiment
  • 2014
  • In: Quality Engineering. - : Informa UK Limited. - 0898-2112 .- 1532-4222. ; 26:2, pp. 154-167
  • Journal article (peer-reviewed) abstract
    • Industrial experiments are often subjected to critical disturbances and in a small design with few runs the loss of experimental runs may dramatically reduce analysis power. This article considers a common situation in process industry where the observed responses are represented by time series. A time series analysis approach to analyze two-level factorial designs affected by disturbances is developed and illustrated by analyzing a blast furnace experiment. In particular, a method based on transfer function-noise modeling is compared with a ‘traditional’ analysis using averages of the response in each run as the single response in an analysis of variance (ANOVA).
  •  
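A minimal sketch of the time-series approach entry 21 compares against run averages: the location effect of a factor change is estimated as a step (intervention) regressor with autocorrelated noise. The AR(1) noise model, the effect size, and the use of statsmodels' ARIMA are illustrative assumptions, not the article's transfer function-noise model of the blast furnace experiment.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
n = 200
noise = np.zeros(n)
for t in range(1, n):                        # AR(1) process noise
    noise[t] = 0.7 * noise[t - 1] + rng.normal(0, 1)
step = (np.arange(n) >= 100).astype(float)   # factor level switched mid-series
y = 5.0 + 2.0 * step + noise                 # true location effect = 2.0

# Step regressor with AR(1) noise; fitted params are, in order:
# constant, step effect (~2), AR coefficient (~0.7), noise variance.
fit = ARIMA(y, exog=step, order=(1, 0, 0)).fit()
print(fit.params.round(2))
```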
22.
  • Lundkvist, Peder, et al. (author)
  • Statistical methods – still ignored? The testimony of Swedish alumni
  • 2020
  • In: Total Quality Management and Business Excellence. - : Taylor & Francis. - 1478-3363 .- 1478-3371. ; 31:3-4, pp. 245-262
  • Journal article (peer-reviewed) abstract
    • Researchers have promoted statistical improvement methods as essential for product and process improvement for decades. However, studies show that their use has been moderate at best. This study aims to assess the use of statistical process control (SPC), process capability analysis, and design of experiments (DoE) over time. The study also highlights important barriers for the wider use of these methods in Sweden as a follow-up study of a similar Swedish study performed in 2005 and of two Basque-based studies performed in 2009 and 2010. While the survey includes open-ended questions, the results are mainly descriptive and confirm results of previous studies. This study shows that the use of the methods has become more frequent compared to the 2005 study. Larger organisations (>250 employees) use the methods more frequently than smaller organisations, and the methods are more widely utilised in the industry than in the service sector. SPC is the most commonly used of the three methods while DoE is least used. Finally, the greatest barriers to increasing the use of statistical methods were: insufficient resources regarding time and money, low commitment of middle and senior managers, inadequate statistical knowledge, and lack of methods to guide the user through experimentations.
  •  
23.
  • Sedghi, Mahdieh, 1984-, et al. (author)
  • A Taxonomy of Railway Track Maintenance Planning and Scheduling : A Review and Research Trends
  • 2021
  • In: Reliability Engineering & System Safety. - : Elsevier. - 0951-8320 .- 1879-0836. ; 215
  • Research review (peer-reviewed) abstract
    • Railway track maintenance and renewal are vital for railway safety, train punctuality, and travel comfort. Therefore, having cost-effective maintenance is critical in managing railway infrastructure assets. There has been a considerable amount of research performed on mathematical and decision support models for improving the application of railway track maintenance planning and scheduling. This article reviews the literature in decision support models for railway track maintenance planning and scheduling and transforms the results into a problem taxonomy. Furthermore, the article discusses current approaches in optimising maintenance planning and scheduling, research trends, and possible gaps in the related decision-making models.
  •  
24.
  • Sedghi, Mahdieh, 1984- (author)
  • Data-driven predictive maintenance planning and scheduling
  • 2020
  • Licentiate thesis (other academic/artistic) abstract
    • The railway track network is one of the major modes of transportation and among a country's most valuable infrastructure assets. Maintenance and renewal of railway infrastructure have a vital role in safety performance, the quality of the ride, train punctuality, and the life cycle cost of assets. Due to the large proportion of maintenance costs, increasing the efficiency of maintenance through optimised planning can result in substantial cost savings. Moreover, from a safety perspective, late maintenance intervention can result in defective track and rolling stock components, which in severe cases can cause accidents such as derailments. An effective maintenance management system is required to ensure the availability of the infrastructure system and meet the increasing capacity demand. The recent rapid technological revolution and the increasing deployment of sensors and connected devices have created new possibilities to increase the effectiveness of maintenance strategies in the railway network. The purpose of this thesis is to expand the knowledge and methods for planning and scheduling of railway infrastructure maintenance. The research vision is to find quantitative approaches for integrated tactical planning and operational scheduling of predictive condition-based maintenance which can be put to practical use and improve the efficiency of the railway system. First, a thorough literature review is performed to identify improvement policies for maintenance planning and scheduling and to analyse current approaches to optimising the maintenance planning and scheduling problem. Second, a novel data-driven multi-level decision-making framework to improve the efficiency of maintenance planning and scheduling is developed. The proposed framework aims to support the selection of track segments for maintenance by providing a practical degradation prediction model based on available condition measurement data. The framework considers the uncertainty of future predictions by using the probability of surpassing a maintenance limit instead of the predicted value itself. Moreover, an extensive total maintenance cost formulation is developed that includes both direct and indirect preventive and corrective costs, to observe the effect of using cost optimisation and grouping algorithms at the operational scheduling level. The performance of the proposed framework is evaluated through a case study based on data from a track section of the iron ore line between Boden and Luleå. The results indicate that the proposed approach can lead to cost savings in both optimal and grouping plans. This framework may be a useful decision support tool in the automated planning and scheduling of maintenance based on track geometry measurements.
  •  
25.
  •  
26.
  • Sedghi, Mahdieh, 1984-, et al. (author)
  • Data‐driven maintenance planning and scheduling based on predicted railway track condition
  • 2022
  • In: Quality and Reliability Engineering International. - : John Wiley & Sons. - 0748-8017 .- 1099-1638. ; 38:7, pp. 3689-3709
  • Journal article (peer-reviewed) abstract
    • Timely planning and scheduling of railway infrastructure maintenance interventions are crucial for increased safety, improved availability, and reduced cost. We propose a data-driven decision-support framework integrating track condition predictions with tactical maintenance planning and operational scheduling. The framework acknowledges prediction uncertainties by using a Wiener process-based prediction model at the tactical level. We also develop planning and scheduling algorithms at the operational level. One algorithm focuses on cost-optimisation, and one algorithm considers the multi-component characteristics of the railway track by grouping track segments near each other for one maintenance activity. The proposed framework's performance is evaluated using track geometry measurement data from a 34 km railway section in northern Sweden, focusing on the tamping maintenance action. We analyse maintenance costs and demonstrate potential efficiency increases by applying the decision-support framework.
  •  
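To illustrate the prediction step in entry 26's framework (and the licentiate thesis in entry 24), here is a minimal sketch of the exceedance probability for a Wiener degradation process: the probability of surpassing a maintenance limit at a given horizon reduces to a normal tail probability. The drift, diffusion, current condition, and limit values are hypothetical, not parameters estimated from the Swedish track data.

```python
from math import sqrt
from scipy.stats import norm

def prob_exceed(x0, drift, sigma, limit, t):
    """P(X(t) > limit) for the Wiener process X(t) = x0 + drift*t + sigma*W(t)."""
    mean = x0 + drift * t
    return float(norm.sf((limit - mean) / (sigma * sqrt(t))))

# Hypothetical values: current condition 1.2 mm, drift 0.05 mm/month,
# diffusion 0.15 mm/sqrt(month), maintenance limit 2.0 mm.
for months in (3, 6, 12):
    p = prob_exceed(1.2, 0.05, 0.15, 2.0, months)
    print(f"{months:2d} months ahead: P(exceed limit) = {p:.2f}")
```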
27.
  • Vanhatalo, Erik, et al. (author)
  • A designed experiment in a continuous process
  • 2007
  • In: Proceedings from the 10th QMOD Conference. - : Lunds University, Campus Helsingborg.
  • Conference paper (peer-reviewed) abstract
    • This paper discusses the design and analysis of an experiment performed in a continuous process (CP). A full factorial design with replicates is used to test three types of pellets at two levels of a process variable in an experimental blast furnace process. Issues and considerations concerning the experimental design and analysis are discussed. For example, an adaptive experimental design is used. We propose a multivariate approach to the analysis of the experiment, in the form of principal component analysis combined with analysis of variance. The factorial design in CPs is found to have promising potential. However, CPs also demand special considerations when planning, performing and analyzing experiments, and therefore further development of experimental strategies and connected methods of analysis for CPs is needed.
  •  
28.
  • Vanhatalo, Erik, et al. (author)
  • A method to determine transition time for experiments in dynamic processes
  • 2009
  • Conference paper (other academic/artistic) abstract
    • Planning, conducting, and analyzing experiments performed in dynamic processes, such as continuous processes, highlights issues that the experimenter needs to consider, for example, process dynamics (inertia) and the multitude of responses. Dynamic systems exhibit a delay (transition time) between the change of an experimental factor and the time when the response is affected. The transition time affects the required length of each experimental run in dynamic processes, and long transition times may call for restrictions on the randomization of runs. By contrast, in many parts-production processes this change is almost immediate. Knowledge about the transition time helps the experimenter to avoid experimental runs that are either too short for a new steady state to be reached, and thus lead to incorrect estimation of treatment effects, or unnecessarily long and costly. Furthermore, knowing the transition time is important during the analysis of the experiment. Determining the transition time in a dynamic process can be difficult since such processes often are heavily instrumented with a multitude of responses. The process responses are typically correlated and react to the same underlying events. Hence, multivariate statistical tools such as principal component analysis (PCA) are often beneficial during analysis. Furthermore, the responses are often highly positively autocorrelated due to frequent sampling. We propose a method to determine the transition time between experimental runs in a dynamic process. We use PCA to summarize the systematic variation in a multivariate response space. The time series analysis techniques 'transfer function-noise modeling' or 'intervention analysis' are then used to model the dynamic relation between an input time series event and an output time series response using the principal component scores. We illustrate the method by estimating the transition time for treatment changes in an experimental blast furnace. This knowledge provides valuable input to the planning and analysis phases of experiments in the process.
  •  
29.
  • Vanhatalo, Erik, et al. (author)
  • A method to determine transition time for experiments in dynamic processes
  • 2011
  • In: Quality Engineering. - : Informa UK Limited. - 0898-2112 .- 1532-4222. ; 23:1, pp. 30-45
  • Journal article (peer-reviewed) abstract
    • Process dynamics is an important consideration during the planning phase of designed experiments in dynamic processes. After changes of experimental factors, dynamic processes undergo a transition time before reaching a new steady state. To minimize experimental time and reduce costs and for experimental design and analysis, knowledge about this transition time is important. In this article, we propose a method to analyze process dynamics and estimate the transition time by combining principal component analysis and transfer function–noise modeling or intervention analysis. We illustrate the method by estimating transition times for a planned experiment in an experimental blast furnace.
  •  
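Here is a minimal sketch of the transition-time idea in entries 28-29: fit a first-order step response to a (principal component) score series after a factor change and read off a settling time. The articles use transfer function-noise and intervention models; this nonlinear least-squares fit, and the synthetic score series, are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
t = np.arange(100, dtype=float)
tau_true = 12.0
# Synthetic PC-score series after a factor change at t = 0.
score = 3.0 * (1 - np.exp(-t / tau_true)) + rng.normal(0, 0.2, t.size)

def step_response(t, gain, tau):
    """First-order response to a step change at t = 0."""
    return gain * (1 - np.exp(-t / tau))

(gain, tau), _ = curve_fit(step_response, t, score, p0=(1.0, 5.0))
# 1 - exp(-3) is roughly 0.95, so ~3*tau samples to reach the new steady state.
print(f"estimated tau = {tau:.1f}; transition time about {3 * tau:.0f} samples")
```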
30.
  •  
31.
  • Vanhatalo, Erik (author)
  • Contributions to the use of designed experiments in continuous processes : a study of blast furnace experiments
  • 2007
  • Licentiate thesis (other academic/artistic) abstract
    • Design of Experiments (DoE) contains techniques, such as factorial designs, that help experimenters maximize the information output from conducted experiments and minimize the amount of experimental work required to reach statistically significant results. The use of DoE in industrial processes is frequently and thoroughly described in the literature. However, continuous processes in industry, frequently found in, for example, the mining and steel industries, highlight special issues that are typically not addressed in the DoE literature. The purpose of this research is to contribute to an increased knowledge of the use of DoE in continuous processes, and it aims to investigate whether factorial designs and other existing techniques in the DoE field are effective tools also in continuous processes. Two studies have been performed. The focus of the first study, a case study of an industrial blast furnace operation, is to explore the potential of using factorial designs in a continuous process and to develop an effective analysis procedure for experiments in a continuous process. The first study includes, for example, interviews, experiments, and large elements of action research. The focus of the second study is to explore how a priori process knowledge can be used to increase the analysis sensitivity for unreplicated experiments. The second study includes a meta-study of experiments in the literature as well as an experiment. The results show that it is possible to use factorial designs in a continuous process even though it is not straightforward, and special considerations by the experimenter will be required. For example, the dynamic nature of continuous processes affects the minimum time required for each run in an experiment since a transient time period is needed between runs to allow the experimental treatments to reach full effect in the process. Therefore, the use of split-plot designs is recommended since it can be hard to completely randomize the experimental run order. It is also found that process control during the conduct of the experiment may be unavoidable in continuous processes. Thus, developing a process control strategy during the planning phase is found to be an important experimental success factor. Furthermore, the results indicate that the multitude of cross-correlated response variables typical for continuous processes can be problematic during the planning phase of the experiment. The many and cross-correlated response variables are also reasons why multivariate statistical techniques, such as principal component analysis, can make an important contribution during the analysis. Moreover, a priori process knowledge is confirmed to have a positive effect on analysis sensitivity for unreplicated experiments. Since experimental effects in continuous processes can be expected to be small compared to noise, a priori process knowledge can also make a valuable contribution during the analysis of experiments in continuous processes. Furthermore, activities like coordination of people, information and communication, as well as logistics planning, are found to be important parts of the experimental effort in continuous processes.
  •  
32.
  • Vanhatalo, Erik, et al. (author)
  • Impact of Autocorrelation on Principal Components and Their Use in Statistical Process Control
  • 2016
  • In: Quality and Reliability Engineering International. - : Wiley. - 0748-8017 .- 1099-1638. ; 32:4, pp. 1483-1500
  • Journal article (peer-reviewed) abstract
    • A basic assumption when using principal component analysis (PCA) for inferential purposes, such as in statistical process control (SPC), is that the data are independent in time. In many industrial processes, frequent sampling and process dynamics make this assumption unrealistic, rendering sampled data autocorrelated (serially dependent). PCA can be used to reduce data dimensionality and to simplify multivariate SPC. Although there have been some attempts in the literature to deal with autocorrelated data in PCA, we argue that the impact of autocorrelation on PCA and PCA-based SPC is neither well understood nor properly documented. This article illustrates through simulations the impact of autocorrelation on the descriptive ability of PCA and on the monitoring performance of PCA-based SPC when autocorrelation is ignored. In the simulations, cross- and autocorrelated data are generated using a stationary first-order vector autoregressive model. The results show that the descriptive ability of PCA may be seriously affected by autocorrelation, causing a need to incorporate additional principal components to maintain the model's explanatory ability. When all variables have the same autocorrelation coefficients, the descriptive ability is intact, while a significant impact occurs when the variables have different degrees of autocorrelation. We also illustrate that autocorrelation may impact PCA-based SPC and cause lower false alarm rates and delayed shift detection, especially for negative autocorrelation. However, for larger shifts the impact of autocorrelation seems rather small.
  •  
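A minimal sketch of the simulation setup entry 32 describes: cross-correlated data are generated from a stationary first-order vector autoregressive (VAR(1)) model with equal versus unequal autocorrelation coefficients, and the variance explained by the first principal components is compared. The dimensions, coefficient values, and error covariance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def var1(phi, n=2000, p=5):
    """Simulate a stationary VAR(1) process with cross-correlated errors."""
    err_cov = 0.5 * np.ones((p, p)) + 0.5 * np.eye(p)
    e = rng.multivariate_normal(np.zeros(p), err_cov, n)
    x = np.zeros((n, p))
    for t in range(1, n):
        x[t] = phi @ x[t - 1] + e[t]
    return x

def explained(x, k=2):
    """Proportion of variance explained by the first k principal components."""
    z = (x - x.mean(axis=0)) / x.std(axis=0)
    ev = np.linalg.eigvalsh(np.cov(z.T))[::-1]   # eigenvalues, descending
    return ev[:k].sum() / ev.sum()

equal = var1(0.8 * np.eye(5))                     # same autocorrelation everywhere
mixed = var1(np.diag([0.1, 0.3, 0.5, 0.7, 0.9]))  # different degrees per variable
print(f"first 2 PCs explain: equal {explained(equal):.2f}, "
      f"mixed {explained(mixed):.2f}")
```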
33.
  • Vanhatalo, Erik, et al. (author)
  • Lag Structure in Dynamic Principal Component Analysis
  • 2016
  • Conference paper (peer-reviewed) abstract
    • Purpose of this presentation: Automatic data collection schemes and the abundant availability of multivariate data increase the need for latent variable methods in statistical process control (SPC), such as SPC based on principal component analysis (PCA). However, process dynamics combined with high-frequency sampling will often cause successive observations to be autocorrelated, which can have a negative impact on PCA-based SPC; see Vanhatalo and Kulahci (2015). Dynamic PCA (DPCA), proposed by Ku et al. (1995), has been suggested as the remedy, 'converting' dynamic correlation into static correlation by adding time-lagged variables to the original data before performing PCA. Hence, an important issue in DPCA is deciding on the number of time-lagged variables to add when augmenting the data matrix, addressed by Ku et al. (1995) and Rato and Reis (2013). However, we argue that the available methods are rather complicated and lack intuitive appeal. The purpose of this presentation is to illustrate a new and simple method to determine the maximum number of lags to add in DPCA based on the structure of the original data. Findings: We illustrate how the maximum number of lags can be determined from time trends in the eigenvalues of the estimated lagged autocorrelation matrices of the original data. We also show the impact of the system dynamics on the number of lags to be considered through vector autoregressive (VAR) and vector moving average (VMA) processes. The proposed method is compared with currently available methods using simulated data. Research limitations/implications: The method assumes that the same number of lags is added for all variables. Future research will focus on adapting the proposed method to accommodate the identification of individual time lags for each variable. Practical implications: The visualization possibility of the proposed method will be useful for DPCA practitioners. Originality/value of presentation: The proposed method provides a tool to determine the number of lags in DPCA that works in a manner similar to the autocorrelation function (ACF) in the identification of univariate time series models and does not require several rounds of PCA. Design/methodology/approach: The results are based on Monte Carlo simulations in the R statistics software and in the Tennessee Eastman Process simulator (Matlab).
  •  
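As context for entry 33, here is a minimal sketch of the data-matrix augmentation step that DPCA (Ku et al., 1995) builds on: lagged copies of each variable are appended before PCA is run. Choosing the number of lags l is exactly what the presentation's method addresses; here l is simply an input, and the tiny example matrix is illustrative.

```python
import numpy as np

def augment_lags(x, l):
    """Row t of the result holds [x_t, x_(t-1), ..., x_(t-l)]; x has shape (n, p)."""
    n, p = x.shape
    cols = [x[l - j:n - j] for j in range(l + 1)]
    return np.hstack(cols)              # shape (n - l, p * (l + 1))

x = np.arange(20, dtype=float).reshape(10, 2)
print(augment_lags(x, 2).shape)         # (8, 6): two lags added, then run PCA
```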
34.
  • Vanhatalo, Erik (author)
  • Multivariate process monitoring of an experimental blast furnace
  • 2010
  • In: Quality and Reliability Engineering International. - : Wiley. - 0748-8017 .- 1099-1638. ; 26:5, pp. 495-508
  • Journal article (peer-reviewed) abstract
    • Process monitoring by use of multivariate projection methods has received increasing attention as it can reduce the monitoring problem for richly instrumented industrial processes with many correlated variables. This article discusses the monitoring and control of a continuously operating experimental blast furnace (EBF). A case study outlines the need for monitoring and control of the EBF and the use of principal components (PCs) to monitor the thermal state of the process. The case study addresses design, testing and online application of PC models for process monitoring. The results show how the monitoring problem can be reduced to following just a few PCs instead of many original variables. The case study highlights the problem of multivariate monitoring of a process with frequently shifting operating modes and process drifts and stresses the choice of a good reference data set of normal process behavior. Possible solutions for adaptations of the multivariate models to process changes are also discussed.
  •  
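A minimal sketch of PCA-based monitoring as described in entry 34, assuming simulated reference data and a plain numpy implementation (not the article's models): fit PCA on normal-operation data, then follow a few scores and the squared prediction error (SPE) for new observations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Reference ("normal operation") data: 8 correlated variables
base = rng.standard_normal((500, 2))
X_ref = base @ rng.standard_normal((2, 8)) + 0.1 * rng.standard_normal((500, 8))

mu, sd = X_ref.mean(axis=0), X_ref.std(axis=0, ddof=1)
Z = (X_ref - mu) / sd

# PCA via SVD; retain k components
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 2
P = Vt[:k].T                           # loadings, shape (8, k)
lam = (s[:k] ** 2) / (Z.shape[0] - 1)  # score variances

def monitor(x_new):
    """Return (Hotelling T2 on k scores, SPE) for one new observation."""
    z = (x_new - mu) / sd
    t = P.T @ z                        # scores
    t2 = np.sum(t ** 2 / lam)
    resid = z - P @ t                  # part not explained by the model
    spe = resid @ resid
    return t2, spe

# A new in-control observation vs. a disturbed one
print(monitor(X_ref[0]))
print(monitor(X_ref[0] + np.array([0, 0, 5, 0, 0, 0, 0, 0])))
```

The point the entry makes is visible here: eight original variables collapse into two monitored scores plus one residual statistic, and the quality of `X_ref` as a reference set directly determines what counts as "normal".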
35.
  • Vanhatalo, Erik (författare)
  • On design of experiments in continuous processes
  • 2009
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • Design of Experiments (DoE) includes powerful methods, such as factorial designs, to help maximize the information output from conducted experiments while minimizing the experimental work required for statistically significant results. The benefits of using DoE in industry are thoroughly described in the literature, although the actual use of the methods in industry is far from pervasive. Continuous processes, frequently found in the process industry, raise special issues that are typically not addressed in the DoE literature. The overall objective of this research is to increase the knowledge of DoE in continuous processes. More specifically, the aims of this research are [1] to identify, explore, and describe potential problems that can occur when planning, conducting, and analyzing experiments in continuous processes, and [2] to propose methods of analysis that help the experimenter in continuous processes tackle some of the identified problems.
      This research has focused on developing analysis procedures adapted for experiments in continuous processes using a combination of existing DoE methods and methods from the related fields of multivariate statistics and time series analysis. The work uses real industrial data as well as simulations. The research method is dominated by the study of the practical use of DoE methods and the developed analysis procedures in an industrial case - the LKAB Experimental Blast Furnace plant.
      The results are presented in six appended papers. Paper A provides a tentative overview of special considerations that the experimenter needs to address in the planning phase of an experiment in a continuous process. Examples of important experimental complications further discussed in the papers are: their multivariate nature, their dynamic characteristics, the need for randomization restrictions due to experimental costs, the need for process control during experimentation, and the time series nature of the responses. Paper B develops a method to analyze factorial experiments with randomization restrictions using principal components combined with analysis of variance. Paper C shows how the multivariate projection method principal component analysis can reduce the monitoring problem for a process with many correlated variables. Paper D focuses on the dynamic characteristics of continuous processes and presents a method to determine the transition time between experimental runs by combining principal components with transfer function-noise models and/or intervention analysis. Paper E further addresses the time series aspects of responses from continuous processes and illustrates and compares different methods to analyze two-level factorials with time series responses to estimate location effects. In particular, Paper E shows how multiple interventions combined with autoregressive integrated moving average models for the noise can be used to effectively analyze experiments in continuous processes. Paper F develops a Bayesian procedure, adapted from Box and Meyer (1986), to calculate posterior probabilities of active effects for unreplicated two-level factorials, successively considering the sparsity, hierarchy, and heredity principles.
  •  
36.
  • Vanhatalo, Erik, et al. (författare)
  • On the structure of dynamic principal component analysis used in statistical process monitoring
  • 2017
  • Ingår i: Chemometrics and Intelligent Laboratory Systems. - : Elsevier. - 0169-7439 .- 1873-3239. ; 167, s. 1-11
  • Tidskriftsartikel (refereegranskat)abstract
    • When principal component analysis (PCA) is used for statistical process monitoring, it relies on the assumption that the data are time-independent. However, industrial data will often exhibit serial correlation. Dynamic PCA (DPCA) has been suggested as a remedy for high-dimensional and time-dependent data. In DPCA the input matrix is augmented by adding time-lagged values of the variables. In building a DPCA model, the analyst needs to decide on (1) the number of lags to add, and (2) given a specific lag structure, how many principal components to retain. In this article we propose a new analyst-driven method to determine the maximum number of lags in DPCA with a foundation in multivariate time series analysis. The method is based on the behavior of the eigenvalues of the lagged autocorrelation and partial autocorrelation matrices. Given a specific lag structure, we also propose a method for determining the number of principal components to retain. The number of retained principal components is determined by visual inspection of the serial correlation in the squared prediction error statistic, Q (SPE), together with the cumulative explained variance of the model (see the sketch after this entry). The methods are illustrated using simulated vector autoregressive and moving average data, and tested on Tennessee Eastman process data.
  •  
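Entry 36 retains the number of components by inspecting serial correlation in the Q (SPE) statistic. The following hypothetical sketch of that diagnostic uses simulated data and a lag-1 autocorrelation number in place of the paper's visual ACF inspection; all values are assumptions, not the paper's code.

```python
# For a candidate number of retained components, compute the SPE (Q) time
# series and inspect its autocorrelation; strong serial correlation
# suggests dynamics left unexplained by the retained components.
import numpy as np

rng = np.random.default_rng(3)

# Simulated series with one dominant autocorrelated latent factor
n = 1000
f = np.zeros(n)
for t in range(1, n):
    f[t] = 0.9 * f[t - 1] + rng.standard_normal()
X = np.outer(f, [1.0, 0.8, 0.6, 0.4]) + 0.3 * rng.standard_normal((n, 4))

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)

def spe_acf1(k):
    """Lag-1 autocorrelation of the SPE series with k retained components."""
    P = Vt[:k].T
    resid = Z - Z @ P @ P.T
    spe = np.sum(resid ** 2, axis=1)
    s = spe - spe.mean()
    return (s[1:] @ s[:-1]) / (s @ s)

for k in (1, 2, 3):
    print(f"k={k}: lag-1 ACF of SPE = {spe_acf1(k):.2f}")
```

Once the dominant dynamic factor is retained (k=1 here), the residual SPE series is close to white noise, which is the stopping signal the entry describes.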
37.
  • Vanhatalo, Erik, et al. (författare)
  • Special considerations when planning experiments in a continuous process
  • 2007
  • Ingår i: Quality Engineering. - : Informa UK Limited. - 0898-2112 .- 1532-4222. ; 19:3, s. 155-169
  • Tidskriftsartikel (refereegranskat)abstract
    • Discontinuous processes dominate experimental applications in practice as well as in the literature. Continuous processes constitute a significant part of goods production, and the need to gain knowledge using experiments is as relevant in such environments as in, for example, parts production. We argue that the characteristics of continuous processes affect the prerequisites for experimental efforts to such an extent that they need special attention. To describe considerations when planning experiments in a continuous process, experiments performed in a blast furnace process are studied. We propose a tentative list of special considerations, which are discussed and summarized in a thirteen-step checklist.
  •  
38.
  • Vanhatalo, Erik, et al. (författare)
  • The Effect of Autocorrelation on the Hotelling T2 Control Chart
  • 2015
  • Ingår i: Quality and Reliability Engineering International. - : Wiley. - 0748-8017 .- 1099-1638. ; 31:8, s. 1779-1796
  • Tidskriftsartikel (refereegranskat)abstract
    • One of the basic assumptions for traditional univariate and multivariate control charts is that the data are independent in time. For the latter, in many cases the data are serially dependent (autocorrelated) and cross-correlated due to, for example, frequent sampling and process dynamics. It is well known that autocorrelation affects the false alarm rate and the shift detection ability of traditional univariate control charts. However, how the false alarm rate and the shift detection ability of the Hotelling T2 control chart are affected by various auto- and cross-correlation structures for different magnitudes of shifts in the process mean is not fully explored in the literature. In this article, the performance of the Hotelling T2 control chart for different shift sizes and various auto- and cross-correlation structures is compared based on the average run length (ARL) using simulated data. Three different approaches to constructing the Hotelling T2 chart are studied for two different estimates of the covariance matrix: [1] ignoring the autocorrelation and using the raw data with theoretical upper control limits; [2] ignoring the autocorrelation and using the raw data with adjusted control limits calculated through Monte Carlo simulations; and [3] constructing the control chart for the residuals from a multivariate time series model fitted to the raw data. To limit the complexity, we use a first-order vector autoregressive process, VAR(1), and focus mainly on bivariate data (see the sketch after this entry).
  •  
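A minimal sketch of the effect examined in entry 38, under assumed settings (a specific VAR(1) coefficient matrix, a known model for the residual approach, and a difference-based covariance estimator as the second estimate); it is illustrative only, not the paper's simulation study.

```python
# Compare Hotelling T2 false alarm rates on autocorrelated VAR(1) data
# for two covariance estimates against the theoretical chi-square limit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Bivariate VAR(1): positively autocorrelated and cross-correlated data
Phi = np.array([[0.7, 0.0],
                [0.2, 0.6]])
n, p = 20000, 2
x = np.zeros((n, p))
for t in range(1, n):
    x[t] = Phi @ x[t - 1] + rng.standard_normal(p)

mu = x.mean(axis=0)

def alarm_rate(S, alpha=0.0027):
    """Fraction of T2 values above the chi-square limit, given covariance S."""
    d = x - mu
    Sinv = np.linalg.inv(S)
    t2 = np.einsum("ij,jk,ik->i", d, Sinv, d)
    return np.mean(t2 > stats.chi2.ppf(1 - alpha, df=p))

S_pooled = np.cov(x, rowvar=False)         # pooled sample covariance
diff = np.diff(x, axis=0)                  # successive differences
S_diff = diff.T @ diff / (2 * len(diff))   # underestimates the covariance
                                           # when autocorrelation is positive
print("pooled covariance     :", alarm_rate(S_pooled))  # near nominal 0.0027
print("difference-based cov. :", alarm_rate(S_diff))    # inflated alarms
```

The sketch reproduces one known mechanism: under positive autocorrelation the successive-difference estimator shrinks the estimated covariance, so T2 values inflate and false alarms exceed the nominal rate even though each observation is marginally in control.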
39.
  • Vanhatalo, Erik, et al. (författare)
  • Towards improved analysis methods for two-level factorial experiments with time series responses
  • 2013
  • Ingår i: Quality and Reliability Engineering International. - : Wiley. - 0748-8017 .- 1099-1638. ; 29:5, s. 725-741
  • Tidskriftsartikel (refereegranskat)abstract
    • Dynamic processes exhibit a time delay between disturbances and the resulting process response. Therefore, one has to acknowledge process dynamics, such as transition times, when planning and analyzing experiments in dynamic processes. In this article, we explore, discuss, and compare different methods to estimate location effects for two-level factorial experiments where the responses are represented by time series. In particular, we outline the use of intervention-noise modeling to estimate the effects and compare this method with using the average of the response observations in each run as a single response. The comparisons are made through simulated experiments using a dynamic continuous process model. The results show that the effect estimates for the different analysis methods are similar. Using the average of the response in each run, but removing the transition time, is found to be a competitive, robust, and straightforward method (see the sketch after this entry), whereas intervention-noise models are found to be more comprehensive, render slightly fewer spurious effects, find more of the active effects in unreplicated experiments, and provide the possibility to model effect dynamics. Copyright © 2012 John Wiley & Sons, Ltd.
  •  
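A minimal sketch of the "run average with transition time removed" analysis compared in entry 39. The process model, true effect sizes, transition length, and all names are illustrative assumptions, not the paper's simulation model.

```python
# Each run of a 2^2 factorial yields a time series; location effects are
# estimated from the averages of the post-transition observations.
import numpy as np
from itertools import product

rng = np.random.default_rng(5)

def run_response(a, b, n_obs=60):
    """One run's time series: first-order dynamics toward the run's level."""
    true_level = 10 + 2.0 * a + 1.0 * b + 0.5 * a * b  # assumed true model
    y, level = np.zeros(n_obs), 0.0
    for t in range(n_obs):
        level += 0.2 * (true_level - level)            # transition dynamics
        y[t] = level + rng.normal(scale=0.5)
    return y

transition = 20                                        # observations to drop
design = list(product([-1, 1], repeat=2))              # 2^2 factorial
means = [run_response(a, b)[transition:].mean() for a, b in design]

A = np.array([[1, a, b, a * b] for a, b in design])    # model matrix
beta = np.linalg.lstsq(A, np.array(means), rcond=None)[0]
print("coefficients (I, A, B, AB):", np.round(beta, 2))
# Location effects are twice the coefficients; compare with 4.0, 2.0, 1.0.
```

Dropping the first `transition` observations is what keeps the run averages from being biased toward the previous run's level, which is the entry's core argument for this simple method.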
40.
  • Vanhatalo, Erik, et al. (författare)
  • Using factorial design and multivariate analysis when experimenting in a continuous process
  • 2008
  • Ingår i: Quality and Reliability Engineering International. - : Wiley. - 0748-8017 .- 1099-1638. ; 24:8, s. 983-995
  • Tidskriftsartikel (refereegranskat)abstract
    • This article discusses the design and analysis of an experiment performed in a continuous process (CP). Three types of iron ore pellets are tested at two levels of a process variable in an experimental blast furnace process, using a full factorial design with replicates. A multivariate approach to the analysis of the experiment, in the form of principal component analysis combined with analysis of variance, is proposed (see the sketch after this entry). The analysis method also considers the split-plot-like structure of the experiment. The article exemplifies how a factorial design combined with multivariate analysis can be used to perform product development experiments in a CP. CPs also demand special considerations when planning, performing, and analyzing experiments. The article highlights and discusses such issues and considerations, for example, the dynamic characteristics of CPs, a strategy to handle disturbances during experimentation, and the need for process control during experimentation.
  •  
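A simplified sketch of the PCA-plus-ANOVA idea in entry 40, under stated assumptions: simulated data, a completely randomized one-way ANOVA per factor, and no split-plot error structure (which the paper does account for). All names and numbers are illustrative.

```python
# Reduce many correlated response variables to a principal component
# score, then run ANOVA on the score for each experimental factor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# 2 factors, 2 levels, 3 replicates -> 12 runs; 6 correlated responses
levels = [-1, 1]
runs = [(a, b, r) for a in levels for b in levels for r in range(3)]
latent = np.array([1.5 * a + 0.8 * b for a, b, _ in runs])
loadings = rng.standard_normal(6)
Y = np.outer(latent, loadings) + 0.5 * rng.standard_normal((len(runs), 6))

# First principal component score of the standardized responses
Z = (Y - Y.mean(axis=0)) / Y.std(axis=0, ddof=1)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
score1 = Z @ Vt[0]

# One-way ANOVA of the PC1 score for each factor in turn
a_vals = np.array([a for a, _, _ in runs])
b_vals = np.array([b for _, b, _ in runs])
for name, f in (("A", a_vals), ("B", b_vals)):
    res = stats.f_oneway(score1[f == -1], score1[f == 1])
    print(f"factor {name}: F = {res.statistic:.1f}, p = {res.pvalue:.4f}")
```

The design choice mirrors the entry: significance testing happens on one score rather than on six correlated responses, so a single ANOVA replaces six dependent ones.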
41.
  • Zia, Shafaq (författare)
  • Non-destructive assessment of additively manufactured objects using ultrasound
  • 2024
  • Licentiatavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • Additive manufacturing (AM) enables the manufacturing of complex and tailored products for a wide range of applications, such as aerospace and healthcare. The technology has received a lot of attention in lightweight applications, where it is associated with new design possibilities but also with reduced material costs, material waste, and energy consumption. Ultrasound has the potential to become the material characterization method of choice for AM since it is quick, safe, and scales well with component size. Ultrasound data, coupled with supervised learning techniques, serves as a powerful tool for the non-destructive evaluation of materials such as metals. This research focuses on understanding the additive manufacturing process, the resulting material properties, and the variation captured using ultrasound due to the manufacturing parameters. The case study included in this thesis is the examination of 316L steel cubes manufactured using laser powder bed fusion. The study includes the estimation and prediction of manufacturing parameters using supervised learning (see the sketch after this entry), the assessment of the influence of the manufacturing parameters on the variability within samples, and a quantitative quality assessment of the samples based on the material properties that result from changes in the manufacturing parameters. The research is vital for analyzing the homogeneity of microstructures, advancing online process control, and ensuring the quality of additively manufactured products. The study contributes valuable insights into the relationship between manufacturing parameters, material properties, and ultrasound signatures. Significant variation is captured using ultrasound both within and between samples, which shows that the backscattered signal is sensitive to the microstructure produced by the manufacturing parameters. Since the material properties change with the manufacturing parameters, the quality of a sample can be described by the relation between the material properties and the backscattered ultrasound signals. The thesis is divided into two parts. The first part introduces the study, summarizes the contributions, and outlines future work. The second part contains a collection of papers describing the research in detail.
  •  
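A hypothetical sketch of the supervised-learning step described in entry 41. The synthetic signal model, the choice of laser power as the predicted parameter, the features, and the regressor are all assumptions for illustration; the thesis's actual signals, features, and models may differ.

```python
# Extract features from (synthetic) backscattered ultrasound signals and
# predict a manufacturing parameter with a supervised regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

def synth_signal(laser_power, n=512):
    """Synthetic backscatter: energy and decay vary with the parameter."""
    t = np.arange(n)
    decay = np.exp(-t / (50 + laser_power))        # assumed dependence
    return decay * rng.standard_normal(n) * (1 + laser_power / 200)

powers = rng.uniform(150, 350, size=300)           # hypothetical parameter
signals = np.array([synth_signal(p) for p in powers])

def features(sig):
    """Simple features: RMS energy, peak amplitude, spectral centroid."""
    spec = np.abs(np.fft.rfft(sig))
    centroid = (spec * np.arange(len(spec))).sum() / spec.sum()
    return [np.sqrt(np.mean(sig ** 2)), np.abs(sig).max(), centroid]

X = np.array([features(s) for s in signals])
X_tr, X_te, y_tr, y_te = train_test_split(X, powers, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out signals:", round(model.score(X_te, y_te), 2))
```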