SwePub
Search the SwePub database


Result list for the search "L4X0:0345 7524 srt2:(1990-1994)"


  • Results 1-25 of 34
1.
  • Afghahi, Morteza, 1950- (author)
  • Clocking of high speed CMOS VLSI systems
  • 1991
  • Doctoral thesis (other academic/artistic), abstract:
    • Two consequences of high-level circuit integration may be the increasing costs of design and of interconnections. Interconnection is expensive in terms of silicon area, speed and power. As far as timing is concerned, the clocking schemes of future VLSI systems may not be a simple extension of those used in existing LSI circuits. This is because the relative importance of clock skew increases as MOS technology develops and interconnections become slower and slower. Clock skew makes the design of VLSI synchronous circuits ineffective, complicated and failure prone. For a synchronous scheme to be useful in the VLSI environment and to benefit from the experience gained by designers, it must develop into a fast, simple, structured and robust scheme.

      To alleviate the adverse effects of clock skew, technological as well as circuit techniques may be employed. We have proposed a technological solution: it is suggested that a special interconnection metal layer be introduced into the VLSI process and used for long interconnections. It is shown that by using this technique, the interconnection delay will not be a limiting factor for the performance of synchronous systems.

      We have also considered circuit solutions. To this end, the physical causes of clock skew are investigated. It is shown that even for optimized interconnections, traditional modes of clocking result in unacceptable timing performance for high speed synchronous systems. A new mode of clocking is then presented and analysed in detail. By using this mode of clocking, the performance of synchronous systems scales with the minimum feature size of MOS transistors.

      We have developed a synchronous scheme that is structured, simple and general. These factors also make the CMOS systems well suited for design compilation. A circuit technique is proposed that makes the design of synchronous schemes robust. The performance of different asynchronous schemes in the VLSI environment is also investigated. It is shown that synchronous schemes outperform standard asynchronous schemes for a wide range of important applications.

      Finally, in order to test some of the developed rules and principles, a chip has been designed as an example. In this design, a new hardware algorithm for sorting is presented. This algorithm is based on bit-serial data processing. It is shown that this design can operate at a clock frequency determined by the computational module delays and not by the clock skew.
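    As background to why skew limits cycle time, the standard single-phase clocking constraint (a textbook relation, not a formula quoted from the thesis) reads

      \[ T_{\text{clk}} \;\ge\; t_{\text{cq}} + t_{\text{logic}} + t_{\text{setup}} + t_{\text{skew}} \]

    so clock skew adds directly to the minimum clock period; skew that grows as interconnections scale eventually dominates the computational module delays, which is the failure mode the thesis addresses.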
2.
  • Andersson, Mats T., 1960- (author)
  • Controllable Multi-dimensional Filters and Models in Low-Level Computer Vision
  • 1992
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis concerns robust estimation of low-level features for use in computer vision systems. The presentation consists of two parts.

      The first part deals with controllable filters and models. A basis filter set is introduced which supports a computationally efficient synthesis of filters in arbitrary orientations. In contrast to many earlier methods, this approach allows the use of more complex models at an early stage of the processing. A new algorithm for robust estimation of orientation is presented. The algorithm is based on synthesized quadrature responses and supports the simultaneous representation and individual averaging of multiple events. These models are then extended to include estimation and representation of more complex image primitives such as line ends, T-junctions, crossing lines and curvature. The proposed models are based on symmetry properties in the Fourier domain as well as in the spatial plane, and the feature extraction is performed by applying the original basis filters directly to the grey-level image. The basis filters and interpolation scheme are finally generalized to allow synthesis of 3-D filters. The performance of the proposed models and algorithms is demonstrated using test images of both synthetic and real-world data.

      The second part of the thesis concerns an image feature representation adapted for a robust analogue implementation. A possible use for this approach is in analogue VLSI or corresponding analogue hardware adapted for neural networks. The methods are based on projections of quadrature filter responses and mutual inhibition of magnitude signals.
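    As a rough illustration of synthesizing an arbitrarily oriented filter from a fixed basis set (a minimal sketch of the classic first-order steerable-filter identity, not the thesis's own basis set or interpolation scheme):

      import numpy as np

      # Classic first-order steerable pair: a derivative-of-Gaussian filter at
      # any angle is an exact linear combination of the x- and y-derivative
      # kernels, so no new kernel has to be designed per orientation.
      size = 9
      ys, xs = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
      g = np.exp(-(xs**2 + ys**2) / 4.0)   # Gaussian envelope
      gx, gy = -xs * g, -ys * g            # basis filters: d/dx and d/dy of Gaussian

      def steer(theta):
          """Synthesize the derivative-of-Gaussian filter oriented at `theta`
          as a weighted sum of the two fixed basis kernels."""
          return np.cos(theta) * gx + np.sin(theta) * gy

      # A filter oriented at 30 degrees, built purely by reweighting the basis:
      f30 = steer(np.pi / 6)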
3.
  • Andersson, Sören (author)
  • On Dimension Reduction in Sensor Array Signal Processing
  • 1992
  • Doctoral thesis (other academic/artistic), abstract:
    • During the last decades, sensor array signal processing has been a very active research area. More recently, relations between many of the proposed methods have been examined. The problem of assessing the estimation accuracy of these methods has also been addressed. Real-world applications of these techniques involve a spatial distribution of several sensors used for collecting measurements of emitted waveforms of interest. From the measurements, detection and localization as well as estimation of the emitted waveforms can be accomplished. Common examples of applications are radar (electromagnetic waveforms) and sonar (acoustical underwater waveforms).

      Another aspect of array processing that has recently been addressed in the literature is that of dimension reduction, where the data vectors collected at the sensor outputs are reduced in size. This reduction is employed mainly in order to lower the amount of computation necessary for obtaining the parameter estimates of interest, but some other improvements have also been observed. These include, e.g., lower sensitivity to sensor noise correlations and, for some estimation methods, higher resolution capability.

      In this thesis, it is demonstrated how to make the dimension reduction in an optimal fashion, where the optimality is with respect to estimation accuracy. More precisely, an expression is derived that must be satisfied by a transformation matrix acting on the sensor outputs in order to preserve the optimally achievable estimation accuracy (the Cramér-Rao bound) in the reduced space. A transformation matrix design method that tries to reduce some unwanted properties of the optimal transformation is also outlined and examined. This method is based on numerical optimization of a particular performance measure, motivated by the insight obtained in the process of finding the optimal transformation.

      Moreover, an asymptotic analysis is performed, using the reduced data vectors, that examines the estimation accuracy of several estimation methods when a large number of sensor elements is used. This analysis is valid for a fairly general transformation matrix, and the methods considered are the Weighted Subspace Fitting (WSF) and Noise Subspace Fitting (NSF) methods, including MUSIC. By employing the optimal transformation matrix, the WSF method is shown to be efficient, i.e., to attain the Cramér-Rao bound. An examination of the estimation accuracy, compared to that optimally attainable, is performed for the case when the transformation matrix differs from the optimal one. Finally, an application is studied, considering the potential use of sensor arrays in mobile communication systems.
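    One common form of the dimension reduction discussed is a beamspace transformation: m-dimensional array snapshots are projected onto d < m beams by a matrix T. A minimal sketch with an assumed DFT-beam choice of T (the thesis instead derives the conditions an accuracy-preserving T must satisfy):

      import numpy as np

      m, d, n_snap = 12, 4, 200            # sensors, reduced dimension, snapshots
      rng = np.random.default_rng(0)

      # Simulated array output: one unit-power source near broadside plus noise.
      steering = np.exp(1j * np.pi * np.arange(m) * np.sin(0.1))
      x = np.outer(steering, rng.standard_normal(n_snap)) \
          + 0.1 * (rng.standard_normal((m, n_snap))
                   + 1j * rng.standard_normal((m, n_snap)))

      # Beamspace transformation: d DFT beams (an assumed, non-optimal choice).
      T = np.exp(-2j * np.pi * np.outer(np.arange(m), np.arange(d)) / m) / np.sqrt(m)

      y = T.conj().T @ x                   # reduced d-dimensional snapshots
      R = y @ y.conj().T / n_snap          # d x d covariance instead of m x m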
4.
  • Babic, Ankica, 1960- (author)
  • Medical knowledge extraction : application of data analysis methods to support clinical decisions
  • 1993
  • Doctoral thesis (other academic/artistic), abstract:
    • In building computer-based clinical decision support, extensive data analysis is needed to acquire the medical knowledge required to formulate the decision rules.

      This study explores, compares and discusses several approaches to knowledge extraction from medical data. Statistical methods (univariate, multivariate), probabilistic artificial intelligence approaches (inductive learning procedures, neural networks) and rough sets were used for this purpose. The methods were applied to two clinical sets of data with well-defined patient groups.

      The aim of the study was to use different data-analytical methods to extract knowledge, of both a semantic and a classification nature, making it possible to differentiate among patients, observations and disease groups, which in turn was intended to support clinical decisions.

      Semantic analysis was performed in two ways. In prior analysis, subgroups or patterns were formed based on the distance within the data, while in posterior semantic analysis the 'types' of observations falling into various groups and their measured values were explored.

      To study discrimination further, two empirical systems based on the principle of learning from examples, one based on Quinlan's ID3 algorithm (the AssPro system) and one on CART (Classification and Regression Trees), were compared. The knowledge representation in both systems is tree-structured, so the comparison is made according to the complexity, accuracy and structure of their optimal decision trees. The inductive learning system was additionally compared with and evaluated in relation to the location model of discriminant analysis, linear Fisherian discrimination and rough sets.

      All methods used were analysed and compared with respect to their theoretical and applicative performance, and in some cases they were also assessed for medical appropriateness. By using them for extensive knowledge extraction, it was possible to give a strong methodological basis for the design of clinical decision support systems specific to the problems and medical environments considered.
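    For a flavour of the tree-induction side of the comparison, a minimal CART-style example using scikit-learn on synthetic stand-in data (sklearn's tree is CART; AssPro/ID3 and the clinical data are not reproduced here):

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      # Toy stand-in for a clinical table: two lab variables, two diagnosis groups.
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
      y = np.array([0] * 50 + [1] * 50)

      tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

      # Compare as the thesis does: tree structure/complexity and accuracy.
      print(export_text(tree, feature_names=["lab_1", "lab_2"]))
      print("depth:", tree.get_depth(), "leaves:", tree.get_n_leaves(),
            "training accuracy:", tree.score(X, y))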
5.
  • Birch, Jens (author)
  • Single-crystal Mo/V superlattices : growth, structure, and hydrogen uptake
  • 1994
  • Doctoral thesis (other academic/artistic), abstract:
    • Fundamental studies concerning the growth, structural characterization and hydrogen uptake of single-crystal (001)-oriented Mo/V superlattices have been performed. The superlattices were grown by dual-target magnetron sputtering in a pure Ar atmosphere at < 6·10⁻³ Torr on (001)-oriented MgO substrates. X-ray diffraction (XRD), X-ray and neutron reflectivity, high resolution (HR) as well as ordinary cross-sectional transmission electron microscopy (XTEM) and selected area electron diffraction (SAED) were used for the structural characterization. Hydrogen depth-profiling was performed by the ¹⁵N method.

      For growth of periodic Mo/V superlattices, it is shown that substrate temperatures in the range of 600-700 °C are feasible for epitaxy. At higher growth temperatures substantial interdiffusion occurred. Furthermore, simulations of XRD patterns gave the width of the interfaces as ±1 monolayer (±0.154 nm), which was confirmed by XRD and HRXTEM analyses of a superlattice grown with layer thicknesses D_Mo = D_V = 0.31 nm (2 monolayers). A transition from smooth to wavy V-layers was found to occur at a critical V-layer thickness D_c. In superlattices where the relative amount of V is large, D_c is large, and vice versa for superlattices containing thin V-layers. In superlattices with equally thick Mo- and V-layers, D_c was found to be ~2.5 nm. Mo was found to grow with a uniform thickness following the surface of the V-layers. The layer thickness fluctuations are non-accumulative and disappear if the periodicity of a growing Mo/V superlattice is changed so that D_V becomes smaller than D_c. The origin of the 3D evolution is explained in terms of surface strain and the roughening transition. The interfaces of Mo/V superlattices grown under the influence of energetic ion bombardment ranging from about 15 eV to 250 eV were studied by HRXTEM and XRD. Both techniques indicated a continuous deterioration of the interface quality and an increasing amount of defects with increasing ion energy.

      The diffraction peaks from a class of quasi-periodic superlattices which can be generated by the inflation rules A → AᵐB, B → A (m a positive integer) were found, analytically, experimentally and numerically, to be located at the wavevectors q = 2πΛ⁻¹·r·γ(m)ᵏ, where r and k are integers and Λ is an average superlattice period. The ratios, γ(m), between the thicknesses of the two superlattice building blocks, A and B, must be chosen such that γ(m) = (m + (m² + 4)^(1/2))/2.

      The uptake of hydrogen in the superlattices is found to decrease with decreasing Λ, and for Λ ≤ 5.5 nm the transition between α-VHₓ and β-VHₓ is not observed. A model is proposed which explains the Λ-dependent behaviour of the hydrogen uptake by a transfer of interstitial electrons from Mo to V, creating a 0.49 nm wide H-free interface layer. The existence of this layer is shown both by the ¹⁵N method performed on samples with several values of Λ and by combining simulations of X-ray and neutron reflectivities with measurements on superlattices loaded with either hydrogen or deuterium. The structural change of Mo/V(001) superlattices upon H-loading was measured by a method derived in this work which utilises a combination of X-ray reflectivity and reciprocal space mapping by XRD. The lattice parameters in the layers are measured in the growth direction as well as in the plane of the sample. It is found that the V lattice expands in the growth direction and that the hydrogenation process is associated with relaxation of coherency strain.
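    A small sketch of the inflation construction and the ratio γ(m) (illustrative code only; the letters A and B denote the two building blocks):

      import numpy as np

      def gamma(m):
          """Block-thickness ratio required for the quasi-periodic stacking:
          the positive root of g**2 - m*g - 1 = 0."""
          return (m + np.sqrt(m**2 + 4)) / 2

      def inflate(m, steps):
          """Generate the stacking sequence via A -> A^m B, B -> A."""
          seq = "A"
          for _ in range(steps):
              seq = "".join("A" * m + "B" if s == "A" else "A" for s in seq)
          return seq

      seq = inflate(1, 8)                      # m = 1 gives a Fibonacci superlattice
      ratio = seq.count("A") / seq.count("B")  # approaches gamma(1), the golden mean
      print(ratio, gamma(1))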
6.
  • Björn, Anders, 1966- (author)
  • Removable singularities for Hardy spaces of analytic functions
  • 1994
  • Doctoral thesis (other academic/artistic), abstract:
    • Let Ω ⊂ C be a non-empty open connected set and 0 < p < ∞. Define H^p(Ω) to be the set of all analytic functions f in Ω such that there exists a harmonic function u_f in Ω with |f|^p < u_f. Let K ⊂ Ω be compact and such that Ω ∖ K is connected. Then the set K is said to be a removable singularity for H^p(Ω ∖ K) if H^p(Ω) = H^p(Ω ∖ K). Hejhal proved in 1973 that this notion does not depend on Ω.

      In this thesis we give a survey of the theory of removable singularities for Hardy spaces. We use potential theory, conformal mappings, harmonic measures and Banach space techniques to give new results. One new result is that if dim K > min{1, p} then K is not removable. Several theorems about removability of sets lying on rectifiable curves are given, as well as conditions for removability of some planar self-similar Cantor sets.
7.
  • Bårman, Håkan (author)
  • Hierarchical curvature estimation in computer vision
  • 1991
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis concerns the estimation and description of curvature for computer vision applications. Different types of multi-dimensional data are considered: images (2D); volumes (3D); time sequences of images (3D); and time sequences of volumes (4D).

      The methods are based on local Fourier domain models and use local operations such as filtering. A hierarchical approach is used. Firstly, the local orientation is estimated and represented with a vector field equivalent description. Secondly, the local curvature is estimated from the orientation description. The curvature algorithms are closely related to the orientation estimation algorithms, and the methods as a whole give a unified approach to the estimation and description of orientation and curvature. In addition, the methodology avoids thresholding and premature decision making.

      Results on both synthetic and real-world data are presented to illustrate the algorithms' performance with respect to accuracy and noise insensitivity. Examples illustrating the use of the curvature estimates for tasks such as image enhancement are also included.
8.
  • Chowdhury, Shamsul I., 1949- (author)
  • Computer-based support for knowledge extraction from clinical databases
  • 1990
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis is devoted to aspects related to the analysis and interpretation of medical data, thus allowing knowledge extraction from medical databases. The difficulties with traditional approaches to data analysis and interpretation for people less proficient in statistics are well known. An attempt has been made to identify the different stages in the processes of data analysis and interpretation where this category of users might need help, and to identify how this help could best be provided (studies I, II, III). In this work artificial intelligence approaches have been used as remedies to improve the user-friendliness as well as the power and effectiveness of statistical software with respect to data analysis strategies. Prototype implementations based on these approaches are presented and discussed. Issues pertaining to the evaluation of decision support systems in medicine have been identified and discussed in detail (study IV).

      Knowledge in a knowledge-based system for decision support is generally acquired from experts and the literature. Knowledge can also be effectively extracted from a database of patient observations and from interpretation of those observations. The resulting system would be more accurate in the latter case, especially if it is intended to operate as decision support in the same clinical setting.

      Studies V and VI were conducted to show how retrospectively collected data could be utilized for the purpose of knowledge extraction. More traditional data analysis approaches (study V) were used to analyze a database on liver diseases. The data material used in the study was collected in the HELP system as a routine part of patient care. The main issues involved were the detection of outliers and the treatment of missing values, in order to facilitate the utilization of this kind of database for eventual knowledge extraction. In study V, statistical techniques including discriminant analysis, and artificial intelligence approaches such as inductive learning, were used. The 'K nearest neighbor' technique was found to be an easy and acceptable method for estimating missing values when the database contained only a few missing values for each object. Discriminant analysis was found to be a good method for classifying a patient, based on a set of variables, into two or more disease classes. The results show that when discriminant analysis was applied to two groups based on a relatively large number (19) of variables, only a few (3) of the variables accounted for a high percentage of correct classifications.

      The knowledge-based approach to data analysis and interpretation used in study III was applied to a large database (study VI). The main emphasis was to study the feasibility of the approach in exploring a large patient record system. The data material was taken from Kronan Health Center - a primary health care center in suburban Stockholm with a patient database consisting of about 14,000 medical records. The analysis was carried out to test the hypothesis of a possible causation between hypertension and diabetes. The results of this study support the assumption that there is a relationship between diabetes and hypertension, but the question of the direction of this relationship remained unsolved, as did the question of direct causality. On the other hand, the results of this study are in accordance with the hypothesis of a common metabolic syndrome. The results arrived at by the analysis method (multivariate tabular analysis) utilized by the system are, moreover, in accordance with another statistical method (log-linear analysis). This also supports the approach taken in the knowledge-based system.
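    A minimal sketch of the 'K nearest neighbor' missing-value estimation idea on a toy table (not the thesis's implementation or data):

      import numpy as np

      def knn_impute(X, k=3):
          """Fill missing entries (NaN) with the mean of the k nearest complete
          rows, nearest in Euclidean distance over the observed variables."""
          X = X.astype(float).copy()
          complete = X[~np.isnan(X).any(axis=1)]       # rows with no gaps
          for row in X:                                # rows are views into X
              miss = np.isnan(row)
              if not miss.any():
                  continue
              d = np.linalg.norm(complete[:, ~miss] - row[~miss], axis=1)
              neighbours = complete[np.argsort(d)[:k]]
              row[miss] = neighbours[:, miss].mean(axis=0)
          return X

      data = np.array([[1.0, 2.0, 3.0],
                       [1.1, np.nan, 2.9],
                       [0.9, 2.1, 3.1],
                       [5.0, 6.0, 7.0]])
      print(knn_impute(data, k=2))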
9.
  • Dahlbäck, Nils, 1949- (author)
  • Representations of discourse : cognitive and computational aspects
  • 1991
  • Doctoral thesis (other academic/artistic), abstract:
    • This work is concerned with empirical studies of cognitive and computational aspects of discourse representations. A more specific aim is to contribute to the development of natural language interfaces for interaction with computers, especially the development of representations making possible a continuous interactive dialogue between user and system.

      General issues concerning the relationship between human cognitive and computational aspects of discourse representations are studied through an empirical and theoretical analysis of a psychological theory of discourse coherence, the theory of mental models. The analysis suggests that there are principled limits to what workers in computational linguistics can learn from psychological work on discourse processing.

      As far as the theory of mental models as a psychological theory of discourse is concerned, the effect of previous background knowledge of the domain of discourse on the processing of the types of texts often used in previous work is demonstrated. It is argued that this demonstration does not invalidate any of the basic assumptions of the theory, but should rather be seen as a modification or clarification. An attempt is also made to study the possible existence of different cognitive strategies used by different subjects and in different tasks. While some supporting evidence for this can be seen, it is argued that the results obtained are not conclusive on this issue.

      Another set of studies uses the so-called Wizard-of-Oz method, i.e. dialogues with simulated natural language interfaces. Here the focus of the analysis is on the dialogue structure, and on the use of referring and co-referring expressions in the dialogues. The basic result of the dialogue analysis is that it is possible to describe these kinds of dialogues using a dialogue grammar, the LINDA model, the basic feature of which is the partitioning of dialogues into a number of initiative-response (IR) units. The study of referring expressions also shows a lack of some of the complexities encountered in human dialogues. The results point to the possibility of using computationally simpler methods than has hitherto been assumed, both for dialogue management and for the resolution of anaphoric references.
10.
  • Doherty, Patrick, 1957- (author)
  • NML3 : a non-monotonic formalism with explicit defaults
  • 1991
  • Doctoral thesis (other academic/artistic), abstract:
    • The thesis is a study of a particular approach to defeasible reasoning based on the notion of an information state consisting of a set of partial interpretations constrained by an information ordering. The formalism proposed, called NML3, is a non-monotonic logic with explicit defaults and is characterized by the following features: (1) the use of the strong Kleene three-valued logic as a basis; (2) the addition of an explicit default operator which makes it possible to distinguish tentative conclusions from ordinary conclusions in the object language; (3) the use of the technique of preferential entailment to generate non-monotonic behavior. The central feature of the formalism, the use of an explicit default operator with a model-theoretic semantics based on the notion of a partial interpretation, distinguishes NML3 from existing formalisms. By capitalizing on the distinction between tentative and ordinary conclusions, NML3 provides increased expressibility in comparison to many of the standard non-monotonic formalisms, and greater flexibility in the representation of subtle aspects of default reasoning.

      In addition to NML3, a novel extension of the tableau-based proof technique is presented, in which a signed formula is tagged with a set of truth values rather than a single truth value. This is useful if the tableau-based proof technique is to be generalized to apply to the class of multi-valued logics. A refutation proof procedure may then be used to check logical consequence for the base logic used in NML3 and to provide a decision procedure for the propositional case of NML3.

      A survey of a number of non-standard logics used in knowledge representation is also provided. Various formalisms are analyzed in terms of persistence properties of formulas and their use of information structures.
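    The strong Kleene three-valued connectives that NML3 takes as its base logic can be written down directly (a sketch of the standard truth tables only; the default operator and preferential entailment of NML3 are not modelled here):

      # Truth values: True, False, and None for "undefined"/unknown.

      def k_not(a):
          return None if a is None else not a

      def k_and(a, b):
          if a is False or b is False:   # a definite False decides conjunction
              return False
          if a is None or b is None:     # otherwise any gap leaves it undefined
              return None
          return True

      def k_or(a, b):
          if a is True or b is True:     # a definite True decides disjunction
              return True
          if a is None or b is None:
              return None
          return False

      for a in (True, None, False):
          for b in (True, None, False):
              print(a, b, k_and(a, b), k_or(a, b))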
11.
  • Eidenvall, Lars E. J., 1965- (author)
  • Cardiovascular modelling and ultrasound heart flow quantification : aortic flow and mitral regurgitation
  • 1993
  • Doctoral thesis (other academic/artistic), abstract:
    • The primary objective of this thesis was to model and simulate aortic flow and mitral regurgitation and to improve quantitative ultrasound measurements. The tools used were theoretical analysis, computer simulation, model experiments, image analysis and clinical evaluation.

      The flow in the aorta is known to be influenced by both cardiac function and vascular characteristics. The influence of vascular characteristics was investigated in a three-parameter windkessel model. Peak aortic velocity and acceleration were studied when these parameters were changed. The results indicate that aortic peak flow velocity is related to the compliance of the arterial system, while the peak flow acceleration is inversely related to the characteristic impedance of the aorta and large vessels.

      To obtain a correct aortic flow velocity profile from a two-dimensional colour flow echocardiographic investigation, a unit which incrementally delayed the ECG signal was designed and used to control the ultrasound scanning. By combining velocity data from incrementally delayed images in a software program, a time-corrected profile was obtained.

      The intensity of the continuous wave ultrasound signal has been suggested as a potential method for determining regurgitant heart valve flow volume. Measurements in a hydraulic model showed, however, that the intensity of the signal was related not only to volume but also to peak velocity, measuring angle and machine settings. Hence, conclusions drawn about regurgitant grade from the intensity signal require caution.

      Another method for determination of valve regurgitation is to study the laminar and undisturbed flow in the region of acceleration proximal to the valve, normally the distance from the orifice to the first aliased velocity. This was tested first in a steady flow model using colour M-mode and colour 2D information, and later in a pulsatile flow model. Four different methods using velocity data from the entire reconstructed 2D velocity vector field were investigated. Model experiments and error calculation showed that flow was best determined by integrating velocities along hemispherical lines in two perpendicular planes within an angle of ±45° from the orifice centre line, at a distance of approximately 1.2 to 1.4 times the orifice diameter, corresponding to velocities between 0.15 and 0.45 m/s. By combining 2D flow and spectral velocity data, regurgitant volume could be estimated for circular, diagonal and crescent orifices to within +15 to −11% of the true volume.
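    A minimal sketch of a three-parameter windkessel of the kind used to study peak aortic velocity and acceleration (characteristic impedance Zc, compliance C, peripheral resistance R; the half-sine ejection flow and all parameter values are assumed for illustration, not taken from the thesis):

      import numpy as np

      Zc, C, R = 0.05, 1.1, 1.0          # mmHg·s/ml, ml/mmHg, mmHg·s/ml
      dt, T, t_ej = 1e-3, 0.8, 0.3       # time step, heart period, ejection time

      t = np.arange(0.0, 5 * T, dt)
      q = np.where((t % T) < t_ej,
                   420 * np.sin(np.pi * (t % T) / t_ej), 0.0)   # aortic inflow

      pc = np.empty_like(t); pc[0] = 80.0
      for i in range(len(t) - 1):
          # Windkessel compartment: C dPc/dt = Q - Pc/R  (forward Euler)
          pc[i + 1] = pc[i] + dt * (q[i] - pc[i] / R) / C

      p = pc + Zc * q                    # aortic pressure includes the Zc drop
      print("peak flow %.0f ml/s, pressure range %.0f-%.0f mmHg"
            % (q.max(), p.min(), p.max()))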
12.
  • Forsman, Krister (author)
  • Constructive Commutative Algebra in Nonlinear Control Theory
  • 1991
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis consists of two parts. The first part is a short review of those results from commutative algebra, algebraic geometry and differential algebra that are needed in this work. The emphasis is on constructive methods. The second part contains applications of these methods to topics in control theory, mainly nonlinear systems.

      When studying nonlinear control systems it is common to consider C∞ affine systems, with differential geometry as a mathematical framework. If we instead consider systems where all nonlinearities are polynomial, the most natural tool is commutative algebra. One of the most important constructive methods in commutative algebra is Gröbner bases, which are used for several different purposes in this thesis.

      Conversion from state space realization to input-output form is one of the problems solved. Furthermore, transformations between different realizations of the same input-output equation can be found with the help of elimination theory. Testing algebraic observability is also possible with Gröbner bases.

      In Lyapunov theory, the classical problem of finding the critical level of a local Lyapunov function of a system is addressed. Some variants of this problem are used to analyze the stability and performance robustness of a system subject to structured uncertainty. Similarly, it is possible to make the optimal choice of parameters in a family of local Lyapunov functions for a given system.

      A problem that has not been solved entirely constructively is the question of when a polynomial input-output differential equation has a rational state space realization. This problem is shown to be strictly harder than that of determining the unirationality of hypersurfaces, which is an open question in algebraic geometry.
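    A tiny example of the constructive workhorse, elimination with a lexicographic Gröbner basis, using sympy (a toy curve, not one of the thesis's control systems):

      from sympy import symbols, groebner

      t, x, y = symbols('t x y')

      # Eliminate the "internal" variable t from x = t**2, y = t**3 to obtain
      # a relation between x and y alone - the same mechanism used to pass
      # from a state space realization to input-output form.
      G = groebner([x - t**2, y - t**3], t, x, y, order='lex')
      print([g for g in G.exprs if not g.has(t)])   # the t-free relation x**3 - y**2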
13.
  • Gustafsson, Fredrik (author)
  • Estimation of Discrete Parameters in Linear Systems
  • 1992
  • Doctoral thesis (other academic/artistic), abstract:
    • Linear models are by far the most commonly used approach for describing physical signals and systems. As a result, the theory of linear models is quite extensive in areas like control theory and signal processing. However, in many applications a linear model is adequate only if some information of a discrete nature is available. This thesis addresses the problem of how to treat and estimate this discrete information. Several applications of the problem of estimating discrete parameters in linear systems from measurements of the system output are covered in the literature. Typically, the problem is treated separately in each context without overlap. One objective of the present work is to discuss this problem in a general framework, relating proposed methods at a higher level of abstraction in order to reveal the main ideas.

      Estimators for discrete parameters are quite complex to compute, due to the fact that essentially every possible value of the discrete parameter has to be examined separately. The key question in applications is to find feasible expressions for the estimators, either exact or approximate. Besides the general discussion of the problem, the thesis contains more detailed treatments of four applications: model structure selection, detection, segmentation and blind equalization. The main topics covered in the applications concern the computation of optimal estimates, analysis, and practical recursive schemes. Several statistical optimality criteria are examined, and a number of efficient computation schemes are presented. In the analysis, questions such as detectability, identifiability and efficiency are addressed. It is also investigated how different prior assumptions influence the estimates, and how their effects can be reduced by using priors that are as non-informative as possible. This leads to estimation algorithms which contain almost no design variables.
14.
  • Haglund, Leif (author)
  • Adaptive Multidimensional Filtering
  • 1991
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis contains a presentation and an analysis of adaptive filtering strategies for multidimensional data. The size, shape and orientation of the filter are signal controlled and thus adapted locally to each neighbourhood according to a predefined model. The filter is constructed as a linear weighting of fixed oriented bandpass filters having the same shape but different orientations. The adaptive filtering methods have been tested, with good results, on both real data and synthesized test data in 2D, e.g. still images, and in 3D, e.g. image sequences or volumes. In 4D, e.g. volume sequences, the algorithm is given in its mathematical form. The weighting coefficients are given by the inner products of a tensor representing the local structure of the data and the tensors representing the orientations of the filters.

      The procedure and filter design for estimating the representation tensor are described. In 2D, the tensor contains information about the local energy, the optimal orientation and a certainty of the orientation. In 3D, the information in the tensor is the energy, the normal to the best fitting local plane and the tangent to the best fitting line, together with certainties of these orientations. In the case of time sequences, a quantitative comparison of the proposed method and other (optical flow) algorithms is presented.

      The estimation of control information is made at different scales. There are two main reasons for this. First, a single filter has a particular limited passband which may or may not be tuned to the different sized objects to be described. Second, size or scale is a descriptive feature in its own right. All of this requires the integration of measurements from different scales. The increasing interest in wavelet theory supports the idea that a multiresolution approach is necessary. Hence the resulting adaptive filter will also adapt in size and to different orientations at different scales.
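    A sketch of the weighting rule for one 2D neighbourhood: each fixed oriented filter receives the inner product between the local structure tensor and that filter's orientation tensor (stand-in tensors, illustrative only; not the thesis's filter design):

      import numpy as np

      angles = [k * np.pi / 4 for k in range(4)]          # four fixed orientations
      M = [np.outer([np.cos(a), np.sin(a)],
                    [np.cos(a), np.sin(a)]) for a in angles]

      # A structure tensor for a neighbourhood dominated by variation along x
      # (e.g. a vertical edge): large eigenvalue for eigenvector (1, 0).
      T = np.array([[1.0, 0.0],
                    [0.0, 0.1]])

      weights = [np.tensordot(T, Mk) for Mk in M]         # <T, M_k> inner products
      print(weights)   # the x-oriented filter receives the largest weight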
15.
  • Hjalmarsson, Håkan (author)
  • Aspects in Incomplete Modeling in System Identification
  • 1993
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis considers several aspects of the system identification problem. The major issues, however, are how one should represent and assess model errors. These issues have come to be central in system identification in recent years - mainly due to their relevance for robust control.

      The random error is the deviation between models based on finite data and infinite data. A measure of this error is the covariance matrix of the parameter estimate. An explicit expression for this covariance matrix has long been known but, except under very restrictive assumptions, a consistent estimate has been missing. We derive two conceptually different, simple, explicit and consistent estimation methods.

      A somewhat controversial issue in identification is whether disturbances should be considered deterministic or stochastic. We show that, for prediction error methods or correlation methods, the input can be used to force a deterministic disturbance to behave like a stochastic disturbance from an identification point of view.

      Current model error problem formulations are somewhat ill-defined; it is in principle always possible to estimate the system perfectly asymptotically. We analyze a, perhaps more realistic, problem formulation which prevents this from being possible.

      When experimental data are sparse it is important to be able to incorporate prior knowledge of the system. A Bayesian procedure, suitable for transfer function estimation, is developed for this purpose.

      Model validation is another important issue. A standard procedure is to use statistical, so-called correlation tests. It has previously been noticed that these tests may give levels of significance that differ from the desired ones. We show that it is possible to modify these tests so as to avoid this artifact.

      Spectral analysis is widely used for identification and time-series analysis. The method has previously been thoroughly analyzed from a quadratic mean point of view, and here we complement this analysis with an almost sure convergence analysis.

      To support the analysis of the aforementioned methods, a stochastic framework is developed. Byproducts of this are extensions of some convergence results for prediction error methods and instrumental variable methods.
16.
  • Klein, Inger (author)
  • Automatic Synthesis of Sequential Control Schemes
  • 1993
  • Doctoral thesis (other academic/artistic), abstract:
    • Of all hard- and software developed for industrial control purposes, the majority is devoted to sequential, or binary valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shutdown to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually, without much theoretical support for a systematic approach.

      We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general, the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel.

      For a larger class of problems we propose a method to split the original problem into a number of simpler problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. This possibility opens up completely new applications such as operator supervision and simplified error recovery and restart procedures after a plant fault has occurred.

      Additionally, we analyze reachability for a restricted class of problems. For this class we state a reachability criterion that may be checked using a slightly modified version of one of the above-mentioned algorithms.
17.
  • Lindahl, Olof (author)
  • The impression technique for assessment of tissue oedema : instrumentation, evaluation and applications
  • 1993
  • Doctoral thesis (other academic/artistic), abstract:
    • An instrument for clinical noninvasive assessment of tissue oedema, based on an impression method, was developed. The method measures and evaluates the decaying force, due to translocation of tissue fluid, during mechanical compression of any site of tissue. We applied the impression method to physical models, animal models, and patients. Significant parameters for the assessment of tissue oedema, estimating tissue fluid translocation and tissue pressure, could be derived from the registered impression force curves.

      Accuracy was determined theoretically and reproducibility was estimated on plastic foam. We described the clinical procedure for the instrument, and preliminary results from patients with chronic pitting oedema showed that the instrument detected larger fluid translocation on oedematous sites than on non-oedematous sites. We concluded that the instrument was acceptable for accurate measurements on biological tissue.

      Evaluation was performed in a rat testis model in which testicular interstitial fluid volume could be changed both artificially, by 30-min infusions of fluids with different fluid resistance properties, and pharmacologically, by administration of hormones. We found that tissue pressure increased with infused fluid volume, and changes as small as 16 μl (7% of the total testis interstitial fluid volume) could be detected. Fluid translocation changed depending upon the infused fluid's resistance properties. Hormone-induced changes in rat testis oedema altered both fluid translocation and tissue pressure. Discrete changes in vascular permeability were monitored.

      Investigation of generalised oedema in patients suffering from burn injury showed that tissue fluid translocation increased up to a maximum value 6 days postburn and declined thereafter. We found tissue pressure to be relatively high during the first 7 days postburn as compared with 3-week postburn values. Force curve analysis suggested a flux of water-like fluid from the vasculature to the interstitial space during the first 6 days postburn. The course of postburn tissue swelling could be followed and estimated with the impression technique.

      Comparison with a new tactile sensor that measured physical properties of soft tissue showed that both methods detected changes in silicone hardness/softness and hormone-induced changes in rat testis interstitial fluid. We concluded that impression force estimates the hardness of soft tissue, which can be helpful when investigating the hardness of oedematous tissue.
18.
  • Lindberg, Lars-Göran, 1951- (author)
  • Photoplethysmography : methodological studies and applications
  • 1991
  • Doctoral thesis (other academic/artistic), abstract:
    • Photoplethysmography (PPG), an optical non-invasive technique for measuring skin perfusion changes, was investigated and evaluated. The method was compared with laser Doppler flowmetry (LDF) for measuring perfusion changes in human fingers and forearms. The results showed that both PPG and LDF reflect changes in skin perfusion under certain conditions related to the sample volume of PPG. This volume is in turn related to the wavelength and the probe type used. In addition, an increased sensitivity to perfusion changes can, in some vascular beds, be obtained by using a shorter wavelength (560 nm). The DC-coupled PPG does not provide a consistent measure of skin perfusion changes induced by local temperature variations.

      Optical properties of blood in motion were studied in perfusion models. Light reflection and transmission were found to be dependent on several parameters such as blood volume, viscosity, red blood cell orientation and haematocrit. The results also indicate that a shorter wavelength can be used to extract information about blood volume changes, whereas a longer wavelength primarily reflects changes in red cell orientation.

      A fibre optic sensor for monitoring respiratory and heart rates from the same fibre optic probe has been developed. Blood perfusion changes, synchronous with the heart and respiration rates, were measured by analysing light reflected from the skin surface. The new sensor has the advantage of being totally non-invasive. It can be positioned anywhere on the skin surface and is also insensitive to electromagnetic disturbances.

      Pulse oximetry utilizes the PPG signal for monitoring arterial oxygen saturation. The pulse oximeter signal was studied under various blood flow conditions, simulated in an in vitro model. The results indicate that blood flow conditions may affect the accuracy of the instrument.
19.
  • Liu, Dake (author)
  • Low power digital CMOS design
  • 1994
  • Doctoral thesis (other academic/artistic), abstract:
    • This dissertation describes research on low power digital CMOS design performed at the LSI Design Center, Department of Physics and Measurement Technology, Linköping University, Linköping, Sweden. The research covers low power CMOS device design, low power circuit and system techniques, and power estimation in digital CMOS VLSI chips.

      Interest in low power digital CMOS VLSI has increased strongly in the 90s. One reason is that the scaling of digital CMOS has led to a very large power consumption per chip. Another reason is the increased need for portable or mobile products.

      General low power strategies are reviewed in the first part of the dissertation, entitled Low Power Digital CMOS Design. The review covers low power digital CMOS design from the system level, circuit level and device level points of view.

      In the second part of the dissertation, low power through low supply voltage is investigated at the device level, including supply voltage bounds, opportunities for power reduction without speed loss, and temperature- and process-induced deviations. Two main arguments against ultra low supply voltage are discussed: one is the temperature-induced deviation, the other is the global process parameter sensitivity. The main contributions in the second part are:

      1. For the first time, the lower limit of Vdd for static and dynamic CMOS logic is investigated.
      2. For the first time, the possibility of decreasing the supply voltage while maintaining high speed through threshold voltage and process optimization is demonstrated. Two examples are 40 times and 4.8 times power reduction by process optimization without speed loss, for static and dynamic logic respectively.
      3. The possibility of choosing process, Vdd and Vth to give temperature independent circuit speed in digital CMOS is demonstrated.
      4. It is demonstrated that highly reliable digital CMOS technology, free of latch-up and of hot-electron-induced lifetime reduction, is achieved at low supply voltages.

      The three supply voltage values - one that eliminates latch-up, one that avoids hot-electron-induced lifetime reduction and one that gives temperature independent speed - are close to each other. This strongly motivates a decrease of the supply voltage for digital CMOS. Combining the findings from 1 to 4, an optimized digital CMOS technology is proposed in this dissertation. It could support both static and dynamic logic, with low power and high speed, without latch-up problems, and with a long lifetime.

      Before this work, there was no general description of the power distribution in a digital CMOS chip. We need to know how power consumption is distributed among the different parts of the chip in order to direct low power design. Therefore, in the third part of the dissertation, a method for estimation and comparison is discussed. An estimation tool has been developed, and power estimation and comparison examples are given. Two further contributions are:

      5. A power estimation tool has been developed. The tool has been used to demonstrate the prospects of power savings in CMOS ICs.
      6. Power consumption has been compared for different circuit styles, layout styles, and architectures.

      The estimation tool provides a general view of the power distribution in digital CMOS chips. This gives directions for further low power design research. The power consumption comparisons offer suggestions for digital low power design of highly pipelined and other kinds of systems.
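    The leverage of supply-voltage scaling follows from the first-order switching-power relation P ≈ α·C·Vdd²·f (a textbook formula, not the dissertation's estimation tool):

      def switching_power(c_load, v_dd, f_clk, activity=0.5):
          """Average dynamic power of a CMOS node: alpha * C * Vdd^2 * f.
          First-order relation only; leakage and short-circuit currents
          are ignored."""
          return activity * c_load * v_dd**2 * f_clk

      # Halving Vdd alone cuts switching power by 4x at the same frequency:
      p5 = switching_power(c_load=50e-15, v_dd=5.0, f_clk=50e6)
      p25 = switching_power(c_load=50e-15, v_dd=2.5, f_clk=50e6)
      print(p5 / p25)   # -> 4.0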
20.
  • Mridha, Mannan, 1954- (author)
  • Characterization of cutaneous oedema : modelling and measurement methods
  • 1990
  • Doctoral thesis (other academic/artistic), abstract:
    • Three different methods have been developed to characterize oedema. The methods can be applied at any site of the skin to study the effect of oedema on the mechanical properties of the skin. Changes in the viscoelastic properties of oedematous tissues after treatment can be monitored by these methods. Thus the effectiveness of the treatment, and the improvement of the elastic and viscoelastic properties of the oedematous tissues, can be determined by the methods described.

      Mechanical impedance (MI) describes the viscoelastic properties of tissues subjected to external deformation. Differences in MI between normal and oedematous tissues were greater at lower frequencies. Mechanical pulse wave propagation (MPWP) was measured in gel, and in normal and oedematous tissues. The MPWP depended on the density of the gel and on the elasticity and viscosity of the tissues, and differed between oedematous and nonoedematous tissues.

      An instrument was developed which rapidly compressed the skin and recorded the force, which decreased as the fluid translocated. Normal subjects and oedematous patients showed marked differences in the pattern of fluid translocation. The flow rate and the total volume flow varied with the degree of oedema. Force curves obtained from patients who were given pneumatic compression treatment for unilateral post-mastectomy lymphoedema were analysed. Mathematical expressions for the curves were found to define the degree of oedema.

      An analogue electrical model which demonstrates the viscoelastic nature of oedematous tissue under compression has been proposed. The model consists of the electrical counterparts of multicompartment arrangements of springs and dashpots. The model elements were assigned values from experimental results. The models were tested by computer simulation, and the results show viscoelastic behaviour similar to that obtained from the measurement of fluid translocation in tissue under compression.
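    A minimal stand-in for the spring-dashpot modelling: stress relaxation of a single Maxwell element (spring k in series with a dashpot of viscosity eta) under a step compression, which qualitatively reproduces the decaying-force behaviour (illustrative values, not the thesis's multicompartment model):

      import numpy as np

      k, eta = 2.0, 5.0                  # illustrative spring and dashpot values
      tau = eta / k                      # relaxation time of the element
      dt, t_end = 0.01, 10.0

      t = np.arange(0.0, t_end, dt)
      f = np.empty_like(t); f[0] = 1.0   # force right after the step indentation
      for i in range(len(t) - 1):
          f[i + 1] = f[i] - dt * f[i] / tau   # dF/dt = -F/tau (forward Euler)

      # The force decays exponentially as viscous flow relieves the spring:
      print(f[0], f[int(tau / dt)])      # after one tau, roughly 1/e remains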
21.
  • Nagy, Peter A. J. (author)
  • Tools for Knowledge-Based Signal Processing with Applications to System Identification
  • 1992
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis contains three parts. Firstly, there is an introduction to the field of knowledge-based signal processing, which denotes a combination of symbolic and numeric software capable of implementing algorithms of both algorithmic and heuristic character. Secondly, suitable tools for knowledge-based signal processing are discussed, including the tools developed in our research. Finally, we present our applications of knowledge-based signal processing.

      Research in computer science, and especially in the area of artificial intelligence (AI), has provided useful techniques and tools - e.g. expert systems - for "implementing" heuristic knowledge. Such knowledge is hard to express in conventional programming languages like Fortran, Pascal and C.

      To make it easy to implement knowledge-based signal processing applications, tools integrating numerical and symbolic computations (including logic inference) are needed. The tools developed in this research contain integrated building blocks: Common Lisp with YAPS (or Scheme) - the main programming language of the developed tool, extended with an expert system shell; MATLAB - standard numeric software; and MACSYMA - a computer algebra system.

      The applications are directed toward intelligent support systems for system identification software. Two different help systems are discussed and implemented, one for help with black-box modeling, and one for help with physical modeling. The latter system uses bond graphs to describe systems. Bond graphs give a graphical description of physical systems which allows modeling across multiple domains, in a form that strikes a good compromise between a high-level description of physical systems and easy computation of the underlying differential equations.
22.
  • Nilsson, Katarina (author)
  • Cost-Effective Industrial Energy Systems : Multiperiod Optimization of Operating Strategies and Structural Choices
  • 1993
  • Doctoral thesis (other academic/artistic), abstract:
    • It is of great importance to encourage the development of cost-effective industrial energy systems, as the potential for saving energy and capital in industrial applications is often substantial. The MIND method has been developed for multi-period cost optimization of industrial energy systems. The optimal operating strategy of the industrial utility and production systems in co-operation can be found. Existing equipment units can be represented, as well as new equipment structures. The representation of process units is performed in a way that facilitates the analysis of processes from various industrial branches. The production processes can be represented at the desired level of accuracy, i.e. one modelling unit may represent a piece of equipment or a whole process line. Parts of the process system may then be represented with higher accuracy. Since both energy and material flows are included, the interaction between the utility system and the production system can be studied. Nonlinear relations, found in expressions for energy demand, energy conversion efficiency and investment cost, are linearized in mixed-integer linear programming. A flexible time scale facilitates both long and short term analyses. The optional time scale allows variations in boundary and process conditions to be represented. The MIND method has been applied to several industrial energy systems. The optimal operating strategies of a pulp and paper mill and a refinery showed opportunities for considerable capital savings. In the case of the refinery, possibilities for energy recovery measures, calculated by Pinch Technology, were also included in the optimization. Calculations show that MIND can be combined with other analysis methods and that the combination yields new insights into the total energy system. In this thesis, the introduction and the literature survey of related work are followed by a description of the method and a chapter of comments on the enclosed papers. The following papers are included and will be referred to in the text (I - VI):

      (I) Nilsson, K. and Soderstrom, M. "Optimizing the operating strategy of a Pulp and Paper Mill using the MIND method", Energy - The International Journal, Vol. 17, No. 10, pp. 945-953, 1992. https://doi.org/10.1016/0360-5442(92)90043-Y
      (II) Nilsson, K., Soderstrom, M. and Karlsson, B.G. "MIND optimization reduces the system cost at a Refinery", accepted for publication in Energy - The International Journal, 1993.
      (III) Nilsson, K. and Soderstrom, M. "Industrial applications of production planning with optimal electricity demand", Applied Energy, Vol. 46, No. 2, pp. 181-192, 1993. https://doi.org/10.1016/0306-2619(93)90067-Y
      (IV) Nilsson, K. "Industrial production planning with optimal electricity cost", Energy Convers. Mgmt., Vol. 34, No. 3, pp. 153-158, 1993. https://doi.org/10.1016/0196-8904(93)90131-S
      (V) Nilsson, K. and Sunden, B. "A combined method for the thermal and structural design of industrial energy systems", in Recent Advances in Heat Transfer (Eds. B. Sunden and A. Zukauskas), pp. 1017-1024, Elsevier Science Publishers, Amsterdam, 1992. https://libris.kb.se/bib/4952922
      (VI) Nilsson, K. and Sunden, B. "Optimizing a Refinery using the Pinch Technology and the MIND method", accepted for publication in Heat Recovery Systems & CHP, 1993. https://doi.org/10.1016/0890-4332(94)90011-6
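    A toy multi-period mixed-integer linear program in the spirit of MIND, written with the PuLP library (the two-source structure and all numbers are invented for illustration; MIND itself is far richer):

      from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

      # Meet a heat demand in each period from a boiler (fuel cost varies per
      # period) and/or electric heating, paying a fixed cost when the boiler runs.
      periods = [0, 1, 2]
      demand = {0: 40, 1: 100, 2: 60}          # heat demand per period
      fuel_cost = {0: 2.0, 1: 3.5, 2: 2.5}     # boiler cost per unit heat
      el_cost, start_cost = 5.0, 80.0

      prob = LpProblem("toy_energy_plan", LpMinimize)
      boiler = {t: LpVariable(f"boiler_{t}", lowBound=0, upBound=80) for t in periods}
      grid = {t: LpVariable(f"grid_{t}", lowBound=0) for t in periods}
      on = {t: LpVariable(f"on_{t}", cat=LpBinary) for t in periods}

      prob += lpSum(fuel_cost[t] * boiler[t] + el_cost * grid[t]
                    + start_cost * on[t] for t in periods)
      for t in periods:
          prob += boiler[t] + grid[t] >= demand[t]   # cover demand in period t
          prob += boiler[t] <= 80 * on[t]            # boiler only if switched on

      prob.solve()
      print({t: (boiler[t].value(), grid[t].value()) for t in periods})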
23.
  • Nilsson, Ulf, 1961- (author)
  • Abstract interpretations and abstract machines : contributions to a methodology for the implementation of logic programs
  • 1992
  • Doctoral thesis (other academic/artistic), abstract:
    • Because of the conceptual gap between high-level logic programming languages and existing hardware, the problem of compilation is hard. This thesis addresses two ways of narrowing this gap - program analysis through abstract interpretation, and the introduction of intermediate languages and abstract machines.

      By means of abstract interpretations it is possible to infer program properties which are not explicitly represented in the program - properties that can be used by a compiler to generate specialized code. We describe a framework for constructing and computing abstract interpretations of logic programs with equality. The core of the framework is an abstract interpretation called the base interpretation, which provides a model of the run-time behaviour of the program. The model characterized by the base interpretation consists of the set of all reachable computation states of a transition system specifying an operational semantics reminiscent of SLD-resolution. This model is in general not effectively computable. However, the base interpretation can be used for constructing new abstract interpretations which approximate this model. Our base interpretation combines a simple and concise formulation with the ability to infer a wide range of program properties. The framework supports a variety of computation strategies including, in particular, efficient computation of approximate models using a chaotic iteration strategy.

      We also show that abstract interpretations may form a basis for the implementation of deductive databases. We relate the magic templates approach to bottom-up evaluation of deductive databases to the base interpretation of C. Mellish, and prove not only that they specify isomorphic models but also that the computations which lead up to those models are isomorphic. This implies that methods (for instance, evaluation and transformation techniques) which are applicable in one of the fields are also applicable in the other. As a side effect, we are also able to relate so-called "top-down" and "bottom-up" abstract interpretations.

      Abstract machines and intermediate languages are often used to bridge the conceptual gap between language and hardware. Unfortunately - because of the way they are presented - it is often difficult to see the relationship between the high-level and the intermediate language. In the final part of the thesis we propose a methodology for designing abstract machines for logic programming languages in such a way that much of this relationship is preserved throughout the process. Using partial deduction and other transformation techniques, a source program and an interpreter are "compiled" into a new program consisting of "machine code" for the source program and an abstract machine for the machine code. Based upon the appearance of the abstract machine, the user may choose to modify the interpreter and repeat the process until the abstract machine reaches a suitable level of abstraction. We demonstrate how these techniques can be applied to derive several of the control instructions of Warren's Abstract Machine, thus complementing previous work by P. Kursawe, who reconstructed several of the unification instructions using similar techniques.
  •  
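The bottom-up evaluation of deductive databases that the thesis relates to abstract interpretation is, in its simplest form, repeated application of the immediate-consequence operator until a fixpoint is reached. The sketch below shows this naive least-fixpoint computation for a hypothetical two-rule transitive-closure program; it illustrates the general mechanism only, not the base interpretation or the magic-templates formulation of the thesis.

```python
# Naive bottom-up (fixpoint) evaluation of a tiny Datalog-style program,
# illustrating the least-model computation that bottom-up evaluation of
# deductive databases builds on. The program itself is a made-up example.
facts = {("edge", "a", "b"), ("edge", "b", "c"), ("edge", "c", "d")}

def step(db):
    """One application of the immediate-consequence operator T_P for:
    path(X,Y) :- edge(X,Y).
    path(X,Z) :- edge(X,Y), path(Y,Z)."""
    new = set(db)
    for (p, x, y) in db:
        if p == "edge":
            new.add(("path", x, y))
    for (p, x, y) in db:
        for (q, y2, z) in db:
            if p == "edge" and q == "path" and y == y2:
                new.add(("path", x, z))
    return new

# Iterate T_P until the database no longer grows: the least fixpoint.
db = set(facts)
while True:
    nxt = step(db)
    if nxt == db:
        break
    db = nxt

print(sorted(t for t in db if t[0] == "path"))
```

An abstract interpretation computes an analogous fixpoint over a domain of descriptions of computation states rather than over concrete facts, which is why techniques transfer between the two fields as the abstract argues.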
24.
  • Nordberg, Klas (författare)
  • Signal Representation and Processing using Operator Groups
  • 1994
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • This thesis presents a signal representation in terms of operators. The signal is assumed to be an element of a vector space and subject to transformations of operators. The operators form continuous groups, so-called Lie groups. The representation can be used for signals in general, in particular if spatial relations are undefined, and it does not require a basis of the signal space to be useful.
      Special attention is given to orthogonal operator groups which are generated by anti-Hermitian operators by means of the exponential mapping. It is shown that the eigensystem of the group generator is strongly related to properties of the corresponding operator group. For one-parameter orthogonal operator groups, a phase concept is introduced. This phase can for instance be used to distinguish between spatially even and odd signals and, therefore, corresponds to the usual phase for multi-dimensional signals.
      Given one operator group that represents the variation of the signal and one operator group that represents the variation of a corresponding feature descriptor, an equivariant mapping maps the signal to the descriptor such that the two operator groups correspond. Sufficient conditions are derived for a general mapping to be equivariant with respect to a pair of operator groups. These conditions are expressed in terms of the generators of the two operator groups. As a special case, second order homogeneous mappings are considered, and examples of how second order mappings can be used to obtain different types of feature descriptors are presented, in particular for operator groups that are homomorphic to rotations in two and three dimensions, respectively. A generalization of directed quadrature filters is made. All feature extraction algorithms that are presented are discussed in terms of phase invariance.
      Simple procedures that estimate group generators which correspond to one-parameter groups are derived and tested on an example. The resulting generator is evaluated by using its eigensystem in implementations of two feature extraction algorithms. It is shown that the resulting feature descriptor has good accuracy with respect to the corresponding feature value, even in the presence of signal noise.
  •  
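The construction at the heart of this abstract, orthogonal operator groups obtained from anti-Hermitian generators through the exponential mapping, can be checked numerically in a few lines. The sketch below, a hypothetical 2-D example using NumPy/SciPy, verifies that exp(tA) is orthogonal when A is real anti-symmetric (the real case of anti-Hermitian) and that the generator's eigenvalues are purely imaginary, which is the property connecting the eigensystem of the generator to the phase concept mentioned above.

```python
# One-parameter orthogonal operator group generated by an anti-symmetric
# (real anti-Hermitian) matrix via the exponential mapping. The numbers
# are illustrative only; real signal spaces are higher-dimensional.
import numpy as np
from scipy.linalg import expm

# Anti-symmetric generator: A^T = -A. Here it generates 2-D rotations.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Group elements G(t) = exp(t A) are orthogonal: G(t)^T G(t) = I.
for t in (0.5, 1.0, 2.0):
    G = expm(t * A)
    assert np.allclose(G.T @ G, np.eye(2))

# The generator's eigenvalues are purely imaginary (here +-i), and the
# eigenvalues of G(t) are exp(t * lambda), i.e. unit-modulus phase
# factors e^{+-it}. This is what ties the eigensystem of the generator
# to the group elements and to a phase concept.
lam, _ = np.linalg.eig(A)
print("generator eigenvalues:", lam)                    # approx. [1j, -1j]
print("G(1) eigenvalues:", np.linalg.eig(expm(A))[0])   # approx. e^{+-1j}
```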
25.
  • Shahsavar, Nosrat, 1951- (författare)
  • Design, implementation and evaluation of a knowledge-based system to support ventilator therapy management
  • 1993
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • A good deal of research has been directed toward developing medical knowledge-based systems to assist the health care provider in both diagnostic and management decisions. Different studies by various groups have resulted in clinically useful knowledge-based systems with great potential for supporting decision-making, but due to a lack of methodology in the clinical integration and evaluation of these systems, only a few of them are in regular use. This thesis is devoted to the following aspects of knowledge-based system development:
      Knowledge Representation: Representing knowledge and mimicking the decision-making behavior of domain experts is a central problem in the development of medical knowledge-based systems. The chosen representation scheme should cover all pieces of the domain knowledge. Reusability and shareability of the knowledge are other desirable features, since the development of useful knowledge bases is a very time- and cost-consuming process.
      Knowledge Acquisition: There is a variety of knowledge acquisition techniques, but not all techniques are suitable for all domains. A successful outcome requires comprehensive knowledge in the knowledge base. The quality of the expert knowledge in the knowledge base determines the quality of the system.
      Knowledge-Base Maintenance: Verification and maintenance of the knowledge base by the domain experts themselves is the main issue here. Since knowledge develops continuously, it is mandatory that the knowledge base is updateable. The knowledge-base contents should be correct and free from redundancy and inconsistency.
      Integration: Integrating the system into the real environment, particularly with real patient data, constitutes a critical step, because prototype environments often differ from the clinical setting. A knowledge-based system can hardly be useful if it cannot be integrated with other applications in the real environment.
      Evaluation: The evaluation of medical decision-support systems is important, and it is also difficult because there is no generally accepted methodology for carrying out such an evaluation. The major aspect in the evaluation of a medical knowledge-based system is to find out whether the system is safe and legal, and to study the impact of the system on patients and the organization.
      This thesis examines and discusses the aforementioned factors based on experiences from the design, development, implementation and evaluation of VentEx, a knowledge-based decision-support and monitoring system we have built and applied in ventilator therapy. Our experience covers the whole development process from the prototype to an integrated on-line system. A hybrid knowledge representation has been used and a domain-specific knowledge acquisition tool (KAVE), equipped with a simulator, has been developed. Real patient data has been used to validate the knowledge base, and a study to measure the impact of the system is ongoing. Evaluation results indicate a high consensus between the doctors and VentEx according to a "gold standard".
  •  
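As a generic illustration of the kind of rule-based reasoning such a decision-support system performs, the sketch below implements a minimal forward-chaining engine over symbolic facts. The rules, fact names and thresholds are invented for illustration and carry no clinical validity; they are not taken from VentEx or the KAVE tool.

```python
# A minimal forward-chaining rule engine illustrating rule-based decision
# support in general. The rules below are hypothetical examples with no
# clinical validity; they are NOT drawn from VentEx's knowledge base.
def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(facts) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Each rule is (condition over the fact set, fact to add when it fires).
rules = [
    (lambda f: "spo2_low" in f and "fio2_max" not in f, "suggest_increase_fio2"),
    (lambda f: "spo2_low" in f and "fio2_max" in f, "suggest_review_by_clinician"),
    (lambda f: "suggest_increase_fio2" in f, "log_advice"),
]

derived = forward_chain({"spo2_low", "fio2_max"}, rules)
print(sorted(derived))
```

In a production system of this kind, the engine would be only one component; the abstract's emphasis on acquisition tools, knowledge-base verification and on-line integration concerns everything around such a core.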