SwePub
Hit list for search "L4X0:1402 1544 ;srt2:(2010-2014)"

Search: L4X0:1402 1544 > (2010-2014)

  • Results 21-30 of 290
21.
  • Andersson, Sven (author)
  • The fuzzy front end of product innovation processes : the influence of uncertainty, equivocality, and dissonance in social processes of evolving product concepts
  • 2010
  • Doctoral thesis (other academic/artistic), abstract
    • Developing new products is essential for the long-term survival of companies. The fuzzy front end (FFE) is the first phase in the product innovation process and is considered an important determinant for successful product innovation. This thesis addresses the social process in which individuals evolve a product concept in the fuzzy front end. In the FFE, individuals must evolve a clear view of 'customer', 'competitor', 'resource' and 'technical solution' aspects regarding the product concept before a go/no-go decision is made and the product concept proceeds to implementation in the development phase. The clearness required regarding these four aspects is acquired through the social process, where individuals think, act, and interact in relation to 'the self' and significant others. The social process in FFEs is addressed through three research questions. The first general research question is: (1) how do product concepts evolve through the social process in success and failure FFEs? From the general research question, two specific research questions are addressed: (2) how do uncertainty, equivocality and dissonance influence the social process when evolving a product concept in the FFE? And (3) how do individuals cope with uncertainty, equivocality and dissonance when evolving a product concept in the FFE? To answer the research questions, data have been collected using the repertory grid technique, techniques for analyzing social networks and alter-ego networks, and narratives. The data derive from four companies, which were selected to maximize differences in technologies between companies and thus differences in the FFEs. Within the four companies, 32 fuzzy front ends of product innovation processes have been studied, and one successful and one failed FFE are described for each company. In total, 22 respondents were interviewed regarding 23 successful and 9 failed projects.
The data have been analyzed on both the individual and group level. The analyses involved repertory grid analysis in order to identify how individuals construct uncertainty, equivocality and dissonance in their frames of reference. The repertory grid analyses also provide information about relations in the social process regarding thoughts and interactions in success and failure FFEs, and about distinctive thought patterns, i.e. homogeneity on the group level. The analyses of narratives provide pictures and information about the FFEs and how individuals addressed uncertainty, equivocality and dissonance. The main findings are that (1) dissonance is a central concept to address in the fuzzy front end in order to understand how clearness of a product concept evolves, and (2) relations are identified between thought, action, and interaction on the one hand and uncertainty, equivocality, and dissonance on the other, which helps us understand the differences between uncertainty, equivocality and dissonance. Lastly, the findings (3) show that differences exist in the social process based on the type of technology characterizing daily production in the companies.
22.
  • Andersson, Tobias (author)
  • Estimating particle size distributions based on machine vision
  • 2010
  • Doctoral thesis (other academic/artistic), abstract
    • This thesis contributes to the field of machine vision and the theory of the sampling of particulate material on conveyor belts. The objective is to address sources of error relevant to surface-analysis techniques when estimating the sieve-size distribution of particulate material using machine vision. The relevant sources of error are segregation and grouping error, capturing error, profile error, overlapping-particle error and weight-transformation error. Segregation and grouping error describes the tendency of a pile to separate into groups of similarly sized particles, which may bias the results of surface-analysis techniques. Capturing error describes the varying probability, based on size, that a particle will appear on the surface of the pile, which may also bias the results of surface-analysis techniques. Profile error is related to the fact that only one side of an entirely visible particle can be seen, which may bias the estimation of particle size. Overlapping-particle error occurs when many particles are only partially visible, which may bias the estimation of particle size because large particles may be treated as smaller particles. Weight-transformation error arises because the weight of particles in a specific sieve-size class might significantly vary, resulting in incorrect estimates of particle weights. The focus of the thesis is mainly on solutions for minimizing profile error, overlapping-particle error and weight-transformation error. In the aggregates and mining industries, suppliers of particulate materials, such as crushed rock and pelletised iron ore, produce materials for which the particle size is a key differentiating factor in the quality of the material. Manual sampling and sieving techniques are the industry-standard methods for estimating the size distribution of these particles. However, as manual sampling is time consuming, there are long response times before an estimate of the sieve-size distribution is available.
Machine-vision techniques promise a non-invasive, frequent and consistent solution for determining the size distribution of particles. Machine-vision techniques capture images of the surfaces of piles, which are analyzed by identifying each particle on the surface of the pile and estimating its size. Sampling particulate material being transported on conveyor belts using machine vision has been an area of active research for over 25 years. However, there are still a number of sources of error in this type of sampling that are not fully understood. To achieve a high accuracy and robustness in the analysis of captured surfaces, detailed experiments were performed in the course of this thesis work, towards the development and validation of techniques for minimizing overlapping-particle error, profile error and weight-transformation error. To minimize overlapping-particle error and profile error, classification algorithms based on logistic regression were proposed. Logistic regression is a statistical classification method that is used for visibility classification to minimize overlapping-particle error and in particle-size classification to minimize profile error. Commonly used size- and shape-measurement methods were evaluated using feature-selection techniques, to find sets of statistically significant features that should be used for the abovementioned classification tasks. Validation using data not used for training showed that these errors can be overcome. The existence of an effect that causes weight-transformation error was identified using statistical analysis of variance (ANOVA). Methods to minimize weight-transformation error are presented herein, and one implementation showed a good correlation between the results using the machine-vision system and manual sieving results. The results presented in this thesis show that by addressing the relevant sources of error, machine-vision techniques allow for robust and accurate analysis of particulate material.
An industrial prototype was developed that estimates the sieve-size distribution of iron-ore pellets in a pellet plant and crushed limestone in a quarry during ship loading. The industrial prototype also enables product identification of crushed limestone to prevent the loading of incorrectly sized products.
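The visibility classification described above can be illustrated with a small sketch. This is not the thesis's actual implementation: the features (normalized area, boundary smoothness), the data, and the training setup are all hypothetical; the sketch only shows a plain logistic-regression classifier of the kind the abstract names, trained by gradient descent.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights w and bias b by stochastic gradient descent on the logistic loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the logistic loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """1 = fully visible particle, 0 = partially occluded."""
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Hypothetical training set: [normalized area, boundary smoothness] per
# surface particle; label 1 = fully visible, 0 = partially occluded.
X = [[0.9, 0.8], [0.85, 0.9], [0.8, 0.85], [0.2, 0.3], [0.3, 0.2], [0.25, 0.35]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
label = predict(w, b, [0.88, 0.82])  # a large, smooth region
```

In the thesis the feature sets were chosen by feature selection over standard size and shape measurements; the two features here merely stand in for that.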
23.
  • Andersson, Ulf (author)
  • Automation and traction control of articulated vehicles
  • 2013
  • Doctoral thesis (other academic/artistic), abstract
    • Articulated machines such as load-haul-dump machines, wheel loaders and haulers operate in many different environments and driving conditions. In particular, they need to be able to perform well with road conditions and loads that can change drastically, setting hard requirements on performance and robustness. The control challenges for off-road vehicles are hence quite different from those for standard cars or trucks, which mostly drive on regular roads. An important aspect characterising this is the fact that wheel slip may cause severe damage to the wheels and ground. In particular, tyre lifespan is a serious problem since, for instance, in a modern hauler the tyres often represent 20-25% of the overall operating cost. Better traction control algorithms can strongly contribute to reducing tyre wear and hence operating costs. Increasing fuel prices and increasing environmental awareness have influenced all the main vehicle manufacturers so that the commitment towards lower fuel consumption has become one of the main goals for development. During the last few years, hybrid vehicles have been vigorously developed. For wheel loaders, in particular, the series hybrid concept seems to be suitable, whereby a diesel engine generates electricity for a battery that serves as the power source of the individual wheel motors, enabling regenerative braking as well as partial recovery of the energy necessary to lift the load. Hence, traction control algorithms should be adapted for use with individual wheel drives. Load-haul-dump machines, wheel loaders and haulers are sometimes used in cyclic operations in isolated areas, which is a typical driver for automation. The use of the load-haul-dump machine in underground hard rock mines such as iron ore mines is one example where the conditions for automation are excellent. The working conditions for a driver in the cabin are monotonous.
The working conditions are improved by moving the driver from the machine to a control room and alternating between different remote operations, for instance between load-haul-dump machines and a remote-controlled rock breaker. Moving the driver from the cabin to the control room also has a positive effect on personnel costs, since one operator can handle several machines. However, for the automation to be successful, the cycle time and loading capacity of an automated machine have to match those of a manual machine operated by skilled drivers. A challenge is the remote bucket filling, where traditional tele-remote loading is based only on slightly delayed video feedback from the machine. This is in sharp contrast to manual loading, where the driver closes the loop based on non-delayed 3D vision of the machine relative to the pile as well as listening to the noise and sensing the vibrations of the machine.
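The slip quantity that traction control algorithms of this kind act on can be sketched as follows. The slip definition is a common convention (positive when the wheel spins faster than the vehicle); the proportional torque cut is a hypothetical illustration, not the controller developed in the thesis.

```python
def slip_ratio(wheel_speed, vehicle_speed, wheel_radius):
    """Longitudinal slip: (circumferential speed - ground speed) normalized by the
    larger of the two speeds, a common convention for driven wheels."""
    v_wheel = wheel_speed * wheel_radius  # rad/s * m -> m/s at the contact patch
    denom = max(abs(v_wheel), abs(vehicle_speed), 1e-9)  # avoid divide-by-zero at standstill
    return (v_wheel - vehicle_speed) / denom

def limit_torque(torque_cmd, slip, slip_limit=0.15, gain=4.0):
    """Scale back the commanded torque when slip exceeds the limit (proportional cut)."""
    excess = max(0.0, abs(slip) - slip_limit)
    return torque_cmd * max(0.0, 1.0 - gain * excess)

# Wheel at 10 rad/s with 0.5 m radius (5 m/s at the rim), vehicle at 4 m/s:
s = slip_ratio(10.0, 4.0, 0.5)   # 0.2
t = limit_torque(1000.0, s)      # torque reduced because slip > slip_limit
```

With individual wheel drives, as in the series hybrid concept above, such a limiter could run per wheel motor.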
24.
  • Arasteh Khouy, Iman (author)
  • Cost-effective maintenance of railway track geometry : a shift from safety limits to maintenance limits
  • 2013
  • Doctoral thesis (other academic/artistic), abstract
    • Railway infrastructure is a complex system which comprises different subsystems. Long life span is one of the important aspects of this prime mode of transport. However, the useful life of its assets is highly dependent on the maintenance and renewal strategy used during the assets' life cycle. Today's demands on the railway industry call for increased capacity, including more trains, travelling at higher speeds with higher axle loads. This increased usage results in higher degradation of railway assets and higher maintenance costs. Formerly, railway maintenance procedures were usually planned based on the knowledge and experience of the infrastructure owner. The main goal was to provide a high level of safety, and there was little concern for economic issues. Today, however, the deregulated competitive environment and budget limitations are forcing railway infrastructure managers to move from safety limits to cost-effective maintenance limits to optimise operation and maintenance procedures. The goal is to make operation and maintenance cost-effective while still meeting high safety standards. One of the main requirements for assuring safe and comfortable railway service is maintaining a high quality of track geometry. Poor track geometry quality may, directly or indirectly, result in safety problems, speed reduction, traffic disruption, greater maintenance cost and a higher degradation rate of the other railway components (e.g. rails, wheels, switches and crossings). The aim of this study is to develop a methodology to optimise track geometry maintenance by specifying cost-effective maintenance limits. The methodology is based on reliability and cost analysis and supports the maintenance decision-making process. The thesis presents a state-of-the-art review of track geometry degradation and maintenance optimisation models.
It also includes a case study carried out on the iron ore line in the north of Sweden to analyse the track geometry degradation and discuss possible reasons for the distribution of failures along the track over a year. It describes Trafikverket's (the Swedish Transport Administration) maintenance strategy regarding measuring, reporting on and improving track quality, and it evaluates the efficiency of this strategy. It introduces two new approaches to analyse the geometrical degradation of turnouts due to dynamic forces generated by train traffic. In the first approach, the recorded measurements are adjusted at the crossing point and the relative geometrical degradation of turnouts is then evaluated using two defined parameters, the absolute residual area (ARa) and the maximum settlement (Smax). In the second approach, various geometry parameters are defined to estimate the degradation in each measurement separately. It also discusses optimisation of the track geometry inspection interval with a view to minimising the total ballast maintenance costs per unit traffic load. The proposed model considers inspection time and the maintenance-planning horizon after inspection, and takes into account the costs associated with inspection, tamping and the risk of accidents due to poor track quality. Finally, it proposes a cost model to identify the cost-effective maintenance limit for track geometry maintenance. The model considers the actual longitudinal level degradation rates of different track sections as a function of million gross tonnes (MGT) / time and the observed maintenance efficiency.
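The trade-off behind a cost-effective inspection interval can be illustrated with a toy model. Every number below (cost figures, the linear degradation rate, the maintenance limit) is hypothetical and bears no relation to the thesis's data; the sketch only shows how inspection, tamping and accident-risk costs per MGT combine into a total that has an interior minimum.

```python
def cost_per_mgt(interval_mgt, c_insp=1000.0, c_tamp=5000.0,
                 deg_rate=0.02, limit=1.5, c_risk=2000.0):
    """Illustrative total track-geometry maintenance cost per MGT as a function
    of the inspection interval (in MGT). Inspection cost is spread over the
    interval, tamping is triggered in proportion to predicted degradation
    (deg_rate * interval) relative to the maintenance limit, and a risk cost
    accrues only for degradation beyond the limit."""
    degradation = deg_rate * interval_mgt
    tamping = c_tamp * degradation / limit / interval_mgt          # expected tampings per MGT
    risk = c_risk * max(0.0, degradation - limit) / interval_mgt   # accident risk beyond limit
    return c_insp / interval_mgt + tamping + risk

# Grid search over candidate intervals: short intervals waste inspections,
# long intervals let the track exceed the limit, so the total is U-shaped.
best = min(range(5, 200, 5), key=cost_per_mgt)
```

The thesis's actual model additionally accounts for the planning horizon after inspection and section-specific degradation rates; this sketch omits both.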
25.
  • Arendarenko, Larissa (author)
  • Estimates for Hardy-type integral operators in weighted Lebesgue spaces
  • 2013
  • Doctoral thesis (other academic/artistic), abstract
    • This PhD thesis deals with the theory of Hardy-type inequalities in a new situation, namely when the classical Hardy operator is replaced by a more general operator with a kernel. The kernels we consider belong to the new classes $\mathcal{O}^+_n$ and $\mathcal{O}^-_n$, $n=0,1,...$, which are wider than the so-called Oinarov class of kernels. This PhD thesis consists of four papers (papers A, B, C and D), two complementary appendixes (A$_1$, C$_1$) and an introduction, which puts these publications into a more general frame. This introduction also serves as a basic overview of the field. In paper A some boundedness criteria for the Hardy-Volterra integral operators are proved and discussed. The case $1
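For orientation, the classical Hardy inequality that these kernel operators generalize states, for $p > 1$ and measurable $f \ge 0$:

```latex
\int_0^\infty \left( \frac{1}{x}\int_0^x f(t)\,dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f(x)^p\,dx ,
```

with the constant $(p/(p-1))^p$ sharp. The Hardy-type operators studied in the thesis replace the inner average by a kernel, $(Kf)(x) = \int_0^x k(x,t)\,f(t)\,dt$. The Oinarov class mentioned above is often described as the nonnegative kernels for which there is a constant $D \ge 1$ with $D^{-1}\big(k(x,s)+k(s,t)\big) \le k(x,t) \le D\big(k(x,s)+k(s,t)\big)$ for $0 < t \le s \le x$; the classes $\mathcal{O}^+_n$, $\mathcal{O}^-_n$ of the thesis are wider than this.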
26.
  • Arranz, Miguel Castaño (author)
  • Practical tools for the configuration of control structures
  • 2012
  • Doctoral thesis (other academic/artistic), abstract
    • Process industries have to operate in a very competitive and globalized environment, requiring efficient and sustainable production processes. Production targets need to be translated into control objectives and are usually formulated as performance specifications of the process. The controller design is a difficult task which involves assumptions and simplifications because of the process complexity. Complexity often arises from the large-scale character of a process, e.g. a pulp and paper mill, which can be composed of thousands of control loops. A critical step is the choice of the control configuration, which involves choosing a set of measurements to be used to calculate the control action for each actuator. Current methods for Control Configuration Selection (CCS) include Interaction Measures (IMs). Probably the most widely used IM dates back to 1966, when the Relative Gain Array (RGA) was introduced by Bristol. However, these methods are rarely applied in industry, where control structures are often designed based on previous experience or common sense in interpreting process knowledge, but without the support of theoretical and systematic tools. The work in this thesis is oriented towards the development of such tools for industry application. Several topics on CCS are addressed to deal with this lack of practical use, including robustness to model uncertainty, the need for parametric process models of the complex process, the lack of tools which present the information in connection to the process layout, and the delay from research to education and finally industry application. The main contribution of this thesis is the consideration of model uncertainty in the CCS problem. Since uncertainty is an intrinsic property of all process models, the validity of the control configuration suggested by the IMs cannot be assessed by only analyzing the nominal model.
This thesis introduces methods for the computation of the uncertainty bounds of two gramian-based IMs, which can be used to design robust control configurations. The requirement of process models is an important limitation for the use of the IMs, and the complexity of modeling increases with the number of process variables. This thesis presents novel results in the estimation of IMs, which aim to remove the need for parametric process models in the design of control configurations. CCS using IMs is a heuristic approach, with interpretation needed to select the process interconnections on which control will be based. The traditional IMs present information as an array of real numbers which is disjoint from the process layout. This thesis describes new methods for the interaction analysis of complex processes using weighted graphs, allowing the analysis to be integrated with process visualization for increased process understanding. As a final contribution, this thesis describes the development of the software tool ProMoVis (Process Modeling and Visualization), a platform in which state-of-the-art research in CCS is implemented to facilitate its use in industry applications.
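As a concrete example of an interaction measure, Bristol's RGA mentioned above is computed elementwise as G .* (G^-1)^T from the steady-state gain matrix G. A minimal 2x2 sketch (the gain values are hypothetical, and this is the classical nominal RGA, not the thesis's uncertainty-bounded or gramian-based measures):

```python
def rga_2x2(G):
    """Relative Gain Array of a 2x2 steady-state gain matrix (Bristol, 1966).
    For 2x2 systems the whole array is determined by one relative gain:
    lam = g11*g22 / det(G); rows and columns each sum to 1."""
    (g11, g12), (g21, g22) = G
    det = g11 * g22 - g12 * g21
    lam = g11 * g22 / det  # relative gain of the pairing u1->y1
    return [[lam, 1.0 - lam], [1.0 - lam, lam]]

# Hypothetical plant gains; lam near 1 favours the diagonal pairing
# (u1 controls y1, u2 controls y2), lam far from 1 signals strong interaction.
Lam = rga_2x2([[2.0, 0.5], [0.5, 1.0]])  # lam = 8/7, mild interaction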
27.
  • Awe, Samuel Ayowole (author)
  • Antimony recovery from complex copper concentrates through hydro- and electrometallurgical processes
  • 2013
  • Doctoral thesis (other academic/artistic), abstract
    • Today, one of the major difficulties confronted during copper metallurgy is the elimination of antimony and arsenic impurities from the process. This is because the pure copper ore reserves are becoming exhausted and the resources of unexploited ores often contain relatively high amounts of antimony and arsenic. During smelting of copper concentrates, arsenic is easily removed into the off-gas, while antimony is not readily removed due to its lower partial pressure and high affinity for liquid copper. Therefore, removal of these impurities at an early stage of processing will be beneficial for the copper making process. The present research is aimed at (i) purifying impure complex copper sulphide concentrates by selectively dissolving the impurities, and consequently upgrading the concentrates for pyrometallurgical processing, and (ii) depositing antimony as a marketable product from synthetic alkaline sulphide pregnant leach liquors by electrowinning. The mineralogical investigations conducted on the concentrates studied revealed that tetrahedrite, chalcopyrite, galena, sphalerite and pyrite were the common mineralogical phases present in the concentrates. Silver and arsenic were found as solid solution in the tetrahedrite crystal structure. Alkaline sulphide solution was used to dissolve antimony from the concentrates. Antimony recovery from tetrahedrite dissolution was increased by approximately 280% when the reaction temperature was increased from 84 °C to 105 °C. By raising the concentration of Na2S from 60 g/L to 100 g/L, the extraction of Sb was raised by a factor of 3, while an increase in NaOH concentration from 30 g/L to 60 g/L enhanced the recovery by 140%. It was found that the leaching yield decreased by about 37% when the mineral particle size of the concentrate was increased from -53+38 µm to -106+75 µm.
Under the selected leaching conditions, the estimated activation energy of tetrahedrite dissolution in the leaching reagent was 81 kJ/mol, which is indicative of a chemically controlled leach process. Characterisation of the leach residue by XRD and QEMSCAN proved that the alkaline sulphide lixiviant is selective and effective in dissolving the antimony and arsenic from the complex concentrate. The average crystal chemical formulae of the solid residue determined by QEMSCAN indicate the conversion of tetrahedrite into a new copper sulphide with the stoichiometry Cu1.64S. Tetrahedrite in the concentrate was reduced from 30.2% to 1.1% in the purified leach residue. Moreover, the results of electrowinning tests showed that the initial Na2S concentration had a significant influence on Sb deposition from this specific system. Current efficiency decreased markedly when the Na2S concentration was increased to 150 g/L. The test results indicated that the desired Na2S concentration should be less than 100 g/L. Faraday efficiency increased with increasing current density, provided that the residual Sb concentration in the electrolyte remained above 20 g/L. An increase in NaOH concentration from 100 to 400 g/L raised the current efficiency by a factor of almost 1.5, while the specific energy requirement was reduced from 2.3 to 1.9 kWh/kg. Experimental results demonstrated that the specific energy decreased by almost 38% as the electrolyte temperature increased from 45 to 90 °C, and the optimum temperature should be between 50 and 75 °C to reduce the heating cost. It was noted that polysulphide and thiosulphate had an adverse effect on Sb deposition. The current efficiency of the process decreased sharply from 83% to 32% when the polysulphide concentration was increased from 0 to 30 g/L; at this polysulphide concentration, the specific energy was raised from 1.7 to 4.9 kWh/kg. Sparging of the electrolyte facilitates a smooth and adherent antimony deposit with improved purity.
The results from these experiments demonstrated that the anodic reactions were influenced by anodic current density and NaOH concentration. The molar concentration ratio between hydroxide and free sulphide ions must be ≥ 7.3 to produce appreciable amounts of sulphate in the electrolytic process. The amount of sulphate formed increased from 0.5 to 16.9 g/L when the anodic current density was increased from 500 to 2500 A/m². By raising the NaOH concentration from 100 to 400 g/L, the production of sulphate at the anode was enhanced by 6.2 g/L. However, the concentration of thiosulphate formed during the electrolysis decreased with increasing anode current density and NaOH concentration. The main factors influencing the purity of the antimony deposits were current density and NaOH concentration. Antimony purity was lowered from 99.9% to 99.2% when the current density was increased from 50 to 250 A/m². Sparging of the electrolyte during the electrodeposition enhanced antimony purity by 0.4%. Finally, a simplified integrated hydro-/electro-metallurgical process flowsheet for antimony removal and recovery from Rockliden sulphide copper concentrate was developed. The experimental results from this investigation confirmed that different concentrations of Na2S and NaOH were needed at the leaching and electrowinning stages to achieve an efficient process.
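The current-efficiency and specific-energy figures quoted above follow from Faraday's law. A small sketch, assuming Sb(III) reduction (three electrons per atom) and hypothetical operating values; the thesis's actual cell conditions are not reproduced here:

```python
F = 96485.0    # Faraday constant, C/mol
M_SB = 121.76  # molar mass of antimony, g/mol
N_E = 3        # electrons transferred per Sb(III) -> Sb

def faradaic_mass(current_a, time_s):
    """Theoretical Sb deposit mass (g) from Faraday's law: m = M*I*t/(n*F)."""
    return M_SB * current_a * time_s / (N_E * F)

def current_efficiency(actual_g, current_a, time_s):
    """Fraction of the charge that went into depositing Sb."""
    return actual_g / faradaic_mass(current_a, time_s)

def specific_energy_kwh_per_kg(cell_voltage_v, current_a, time_s, actual_g):
    """Electrical energy (kWh) consumed per kg of deposited metal."""
    energy_kwh = cell_voltage_v * current_a * time_s / 3.6e6  # J -> kWh
    return energy_kwh / (actual_g / 1000.0)

# Hypothetical run: 10 A for one hour depositing 12.5 g of Sb at 1.9 V.
ce = current_efficiency(12.5, 10.0, 3600.0)                 # ~0.83
se = specific_energy_kwh_per_kg(1.9, 10.0, 3600.0, 12.5)    # ~1.5 kWh/kg
```

Side reactions such as polysulphide reduction consume charge without depositing metal, which is how the sharp efficiency drop reported above arises.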
28.
  • Axelsson, Katarina (author)
  • Studies of auroral processes using optical methods
  • 2013
  • Doctoral thesis (other academic/artistic), abstract
    • The Aurora is a visual manifestation of the complex plasma processes that occur as the solar wind interacts with the Earth’s magnetosphere and ionosphere. Therefore, studies of the aurora can lead to better understanding of the near-Earth space environment and of fundamental physical processes. This thesis focuses on optical studies of the aurora, both ground-based observations using the Auroral Large Imaging System (ALIS) and measurements from instruments onboard the Japanese micro-satellite Reimei. Various properties of the aurora are studied, such as the characteristic energy of precipitating electrons and scale sizes of diffuse auroral structures. Our understanding of the ionospheric physical processes involved in a particular auroral emission is improved using conjugate particle and optical data. Auroral light is a result of radiative transitions between excited states of the ionospheric gases. These excited states are formed either by direct electron impact or by a series of more complicated processes, involving chemical reactions, where part of the energy is converted into auroral light. Studies of auroral emissions can therefore give information about primary particle fluxes, ionospheric composition, and the magnetospheric and ionospheric processes leading to auroral precipitation. One way of deducing the characteristic energy of the precipitating particles is by using intensity ratios of auroral emissions. To be reliable, this method requires a good understanding of the processes involved in the auroral emissions used. The method works well if the measurements are made along the geomagnetic field lines. Using data from ALIS, both in magnetic zenith and off magnetic zenith, this method is tested for angles further away from the direction of the magnetic field lines.
The result shows that it is possible to use this technique to deduce the characteristic energy for angles up to 35 degrees away from magnetic zenith. Using ALIS we have also been able to study structures and variations in diffuse aurora. When mapped to the magnetosphere, this provides information about the characteristics of the modulating wave activity in the magnetospheric source region. A statistical study of the scale sizes of diffuse auroral structures was made and the result shows widths and separation between structures of the order of 13-14 km. When mapped to the magnetosphere, this corresponds to 3-4 ion gyro radii for protons with a typical energy of 7 keV. Magnetometer data show that the structures move southward with a speed close to zero in the plasma convection frame. Stationary mirror mode structures in the magnetospheric equatorial plane are a likely explanation for these diffuse auroral structures. In another study we use measured precipitating electron energy spectra to improve our understanding of how the auroral process itself relates to the 427.8 nm auroral emission, which is often used when studying intensity ratios between different emission lines. The 427.8 nm emission is a fairly simple emission to model, with only a few processes involved, but still has some uncertainties, mostly due to the excitation cross section. Simultaneous measurements of the intensity of this emission from ALIS and the intensity and electron flux from Reimei provide a way to evaluate different sets of cross sections in order to find the best fit to the experimental data. It also allows a comparison of the absolute calibration of ALIS and Reimei imagers, improving the possibility to use the space-borne data for other detailed quantitative studies. In order to compare absolute measurements of aurora using different imagers, optical instruments are usually absolute calibrated by exposing them to a calibration light source.
In 2011 an intercalibration workshop was held in Sodankylä, Finland, where nine low-light sources were compared to the radioactive Fritz Peak reference source. The results were compared with earlier calibration workshop results and show that the sources are fairly stable. Two sources were also calibrated with the calibration standard source at UNIS, Svalbard, and the results show agreement with the calibration workshop in Sodankylä within 15 to 25%. This confirms the quality of the measurements with ALIS and in turn also of the Reimei imagers.
29.
  • Baart, Pieter (author)
  • Grease lubrication mechanisms in bearing seals
  • 2011
  • Doctoral thesis (other academic/artistic), abstract
    • Rolling bearings contain seals to keep lubricant inside and contaminants outside the bearing system. These systems are often lubricated with grease; the grease acts as a lubricant for the bearing and seal and improves the sealing efficiency. In this thesis, the influence of lubricating grease on bearing seal performance is studied. Rheological properties of the grease, i.e. shear stress and normal stress difference, are evaluated and related to the lubricating and sealing performance of the sealing system. This includes the seal, grease and counterface. The grease velocity profile in the seal pocket in-between two sealing lips is dependent on the rheological properties of the grease. The velocity profile in a wide pocket is evaluated using a 1-dimensional model based on the Herschel-Bulkley model. The velocity profile in a narrow pocket, where the influence of the side walls on the velocity profile is significant, is measured using micro particle image velocimetry. Subsequently, the radial migration of contaminants into the seal pocket is modelled and related to the sealing function of the grease. Additionally, migration in the axial direction is also found in the vicinity of the sealing contact. Experimental results show that contaminant particles in different greases consistently migrate either away from the sealing contact or towards the sealing contact, even when the pumping rate of the seal can be neglected. Lubrication of the seal lip contact is dependent on several grease properties. A lubricant film in the sealing contact may be built up as in oil lubricated seals, but normal stress differences in the grease within the vicinity of the contact may result in an additional lift force. The grease, which is being sheared in the vicinity of the contact, will also contribute to the frictional torque. It is important to maintain a lubricant film in the sealing contact to minimize friction and wear.
Here the replenishment of oil separated from the grease, also referred to as oil bleed, is of crucial importance. A model is presented to predict this oil bleed based on oil flow through the porous grease thickener microstructure. The model is applied to an axial sealing contact and a prediction of the film thickness as a function of time is made. The work presented in the thesis makes a significant contribution to a better understanding of the influence of lubricating grease on the sealing system performance and seal lubrication conditions.
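The Herschel-Bulkley model used above for the grease velocity profile relates shear stress to shear rate through a yield stress and a power law. A minimal sketch; the parameter values are hypothetical, not measured grease data from the thesis:

```python
def hb_shear_stress(gamma_dot, tau_y, K, n):
    """Herschel-Bulkley constitutive law: tau = tau_y + K * gamma_dot**n for a
    yielded material (gamma_dot > 0). Below the yield stress tau_y the grease
    does not shear, which is what produces plug-flow regions in the seal pocket."""
    return tau_y + K * gamma_dot ** n

def hb_apparent_viscosity(gamma_dot, tau_y, K, n):
    """Apparent (shear-rate-dependent) viscosity tau / gamma_dot, Pa*s."""
    return hb_shear_stress(gamma_dot, tau_y, K, n) / gamma_dot

# Hypothetical grease parameters: yield stress 250 Pa, consistency K = 5 Pa*s^n,
# shear-thinning index n = 0.7, evaluated at a shear rate of 100 1/s.
tau = hb_shear_stress(100.0, 250.0, 5.0, 0.7)      # ~376 Pa
eta = hb_apparent_viscosity(100.0, 250.0, 5.0, 0.7)
```

In a 1-D pocket-flow model this law is integrated across the gap, with an unyielded plug wherever the local stress stays below tau_y.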
30.
  • Backén, Staffan (author)
  • On dynamic array processing for GNSS software receivers
  • 2011
  • Doctoral thesis (other academic/artistic), abstract
    • This thesis presents contributions in the field of satellite navigation with a focus on array processing and related implementation issues. For readers not familiar with GNSS, it also includes a brief overview of satellite navigation. Compared to the state of the art only ten years ago, modern GNSS receivers are very capable. One reason for this improvement is advances in the semiconductor industry that have increased both the available processing power and the energy efficiency. An active research community has also made important contributions, resulting in more sophisticated algorithms. To improve receiver performance even further, auxiliary sensors such as gyros and accelerometers are becoming increasingly common. A related option involves using an antenna with several physical elements. This is known as an antenna array and is often used for radar, sonar and telecommunication applications. Array processing can also be used for GNSS and as such it is the primary focus of this thesis. An array allows for exploration of the spatial domain, in other words a receiver that may differentiate between signals depending on the direction of arrival. For GNSS, where interference and multipath (signal reflection off, for example, buildings or the ground) may be significant sources of error, this is an attractive solution. Although array processing has been the subject of extensive research efforts within other fields, there are several remaining issues with regard to how these techniques can be implemented in a GNSS receiver. With regard to array processing, there are also properties unique to GNSS, such as multiple signal sources at known positions, that have not been explored sufficiently in previous efforts. In this thesis we show how these properties can be exploited to improve receiver performance in dynamic scenarios. In short, the orientation of the antenna platform is estimated accurately (typical variance around 1°) using beamforming techniques.
This information is then used to achieve a better estimate of the radio environment by allowing for longer integration periods when estimating the covariance matrices. A better estimate of the covariance matrices directly translates into improved receiver performance, especially so in areas of moderate levels of multipath/interference.Further, a method to calibrate GNSS array antennas using real signals is investigated in detail. Instead of resorting to electromagnetic simulations that requires precise knowledge about the antenna and installation factors, or RF chamber measurement that is expensive, it is shown how the array antenna can be calibrated using live signals. The accuracy of the resulting model is verified using real data.Also, the first implementation of an RF record and replay system is presented. With such a system data can be recorded in a specific environment, generally a time consuming task, and later played back into the antenna input of any GNSS receiver. Such systems are nowadays commercially available and have proven very useful for testing and validation of GNSS receivers. Throughout the thesis, the required receiver architecture and practical viability of the proposed algorithms are considered.
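The core idea in the abstract, that averaging array snapshots over a longer integration period gives a better covariance-matrix estimate, which in turn feeds a beamformer, can be illustrated with a minimal sketch. The array geometry, signal model, and MVDR (Capon) beamformer below are generic textbook assumptions for illustration, not the thesis's actual algorithms or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4-element uniform linear array, half-wavelength spacing
M = 4                      # number of antenna elements
N = 1000                   # snapshots in the integration period
theta = np.deg2rad(30.0)   # assumed direction of arrival of the satellite signal

# Steering vector for a ULA with element spacing d = lambda/2
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# Simulated snapshots: desired signal plus white noise
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
x = np.outer(a, s) + 0.1 * noise

# Sample covariance matrix: averaging over more snapshots (a longer
# integration period) reduces the variance of this estimate
R = x @ x.conj().T / N

# MVDR (Capon) weights: minimize output power subject to unit gain
# toward the assumed satellite direction
Rinv = np.linalg.inv(R)
w = Rinv @ a / (a.conj() @ Rinv @ a)

# Distortionless constraint: gain toward theta is 1 by construction
print(abs(w.conj() @ a))
```

The quality of `w` depends directly on how well `R` is estimated, which is why knowing the platform orientation, and thus being able to integrate over longer periods, pays off.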
Type of publication
doctoral thesis (290)
Type of content
other academic/artistic (290)
Author/editor
Wikberg-Nilsson, Åsa (1)
Larsson, Anders (1)
Berggård, Glenn (1)
Olsson, Malin (1)
Andersson, Tobias (1)
Eriksson, Lisbeth (1)
Andersson, Ulf (1)
Rönnberg, Sarah (1)
Wedberg, Dan (1)
Lundgren, Agneta (1)
Lindberg, Malin (1)
Olofsson, Jennie (1)
Grane, Camilla (1)
Forsberg, Lena (1)
Öhrling, Therese (1)
Berglund, Leif (1)
Slapak, Rikard (1)
Fatemi, Shahab (1)
Berglund, Anders (1)
Nilsson, Daniel (1)
Jacobsson, Lars (1)
Kjellberg, Anders (1)
Engström, Fredrik (1)
Johansson, Eva (1)
Lundbäck, Andreas (1)
Ager, Bengt (1)
Olsson, Göran, Profe ... (1)
Johansson, Daniel (1)
Sas, Gabriel (1)
Karlsson, Anna (1)
Cristovao, Luis (1)
Carlsson, Per (1)
Sjöblom, Magnus (1)
Nilsson, Mats, docen ... (1)
Lundgren, Maria (1)
Ahmadi, Alireza (1)
Kumar Verma, Ajit, P ... (1)
Rantatalo, Matti (1)
Arasteh Khouy, Iman (1)
Andersson, Karl (1)
Semberg, Pär (1)
Kalaykov, Ivan, Prof ... (1)
Brännvall, Evelina (1)
Calleecharan, Yogesh ... (1)
Nässelqvist, Mattias (1)
Lindkvist, Göran (1)
Åbom, Mats, Professo ... (1)
Noël, Maxime (1)
Linder, Tomas (1)
Awe, Samuel Ayowole (1)
Higher education institution
Luleå tekniska universitet (290)
Högskolan i Gävle (4)
Malmö universitet (2)
Högskolan Dalarna (2)
Umeå universitet (1)
Högskolan i Halmstad (1)
Högskolan Väst (1)
Linköpings universitet (1)
Chalmers tekniska högskola (1)
Högskolan i Borås (1)
RISE (1)
Marie Cederschiöld högskola (1)
Language
English (268)
Swedish (22)
Research subject (UKÄ/SCB)
Engineering and technology (194)
Natural sciences (45)
Social sciences (36)
Medical and health sciences (17)
Humanities (4)

Year
