SwePub
Search the SwePub database

Results for the query "WFRF:(Villani Mattias)"

  • Results 1-50 of 89

1.
2.
3.
  • Aad, G., et al. (authors)
  • Studies of the performance of the ATLAS detector using cosmic-ray muons
  • 2011
  • In: European Physical Journal C. Particles and Fields. - : Springer Science and Business Media LLC. - 1434-6044 .- 1434-6052. ; 71:3
  • Journal article (peer-reviewed). Abstract:
    • Muons from cosmic-ray interactions in the atmosphere provide a high-statistics source of particles that can be used to study the performance and calibration of the ATLAS detector. Cosmic-ray muons can penetrate to the cavern and deposit energy in all detector subsystems. Such events have played an important role in the commissioning of the detector since the start of the installation phase in 2005 and were particularly important for understanding the detector performance in the time prior to the arrival of the first LHC beams. Global cosmic-ray runs were undertaken in both 2008 and 2009 and these data have been used through to the early phases of collision data-taking as a tool for calibration, alignment and detector monitoring. These large datasets have also been used for detector performance studies, including investigations that rely on the combined performance of different subsystems. This paper presents the results of performance studies related to combined tracking, lepton identification and the reconstruction of jets and missing transverse energy. Results are compared to expectations based on a cosmic-ray event generator and a full simulation of the detector response.
4.
  • Aad, G., et al. (authors)
  • The ATLAS Simulation Infrastructure
  • 2010
  • In: European Physical Journal C. Particles and Fields. - : Springer Science and Business Media LLC. - 1434-6044 .- 1434-6052. ; 70:3, pp. 823-874
  • Journal article (peer-reviewed). Abstract:
    • The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.
5.
  • Abramian, David, 1992-, et al. (authors)
  • Anatomically Informed Bayesian Spatial Priors for FMRI Analysis
  • 2020
  • In: ISBI 2020. - : IEEE. - 9781538693308
  • Conference paper (peer-reviewed). Abstract:
    • Existing Bayesian spatial priors for functional magnetic resonance imaging (fMRI) data correspond to stationary isotropic smoothing filters that may oversmooth at anatomical boundaries. We propose two anatomically informed Bayesian spatial models for fMRI data with local smoothing in each voxel based on a tensor field estimated from a T1-weighted anatomical image. We show that our anatomically informed Bayesian spatial models result in posterior probability maps that follow the anatomical structure.
6.
  • Adolfson, Malin, et al. (authors)
  • Bayesian analysis of DSGE models : Some comments
  • 2007
  • In: Econometric Reviews. - : Informa UK Limited. - 0747-4938 .- 1532-4168. ; 26:2-4, pp. 173-185
  • Journal article (peer-reviewed). Abstract:
    • Sungbae An and Frank Schorfheide have provided an excellent review of the main elements of Bayesian inference in Dynamic Stochastic General Equilibrium (DSGE) models. Bayesian methods have, for reasons clearly outlined in the paper, a very natural role to play in DSGE analysis, and the appeal of the Bayesian paradigm is indeed strongly evidenced by the flood of empirical applications in the area over the last couple of years. We expect their paper to be the natural starting point for applied economists interested in learning about Bayesian techniques for analyzing DSGE models, and as such the paper is likely to have a strong influence on what will be considered best practice for estimating DSGE models. The authors have, for good reasons, chosen a stylized six-equation model to present the methodology. We shall use here the large-scale model in Adolfson et al. (2005), henceforth ALLV, to illustrate a few econometric problems which we have found to be especially important as the size of the model increases. The model in ALLV is an open economy extension of the closed economy model in Christiano et al. (2005). It consists of 25 log-linearized equations, which can be written as a state space representation with 60 state variables, many of them unobserved. Fifteen observed unfiltered time series are used to estimate 51 structural parameters. An additional complication compared to the model in An and Schorfheide's paper is that some of the coefficients in the measurement equation are non-linear functions of the structural parameters. The model is currently the main vehicle for policy analysis at Sveriges Riksbank (Central Bank of Sweden) and similar models are being developed in many other policy institutions, which testifies to the model's practical relevance. The version considered here is estimated on Euro area data over the period 1980Q1-2002Q4. We refer to ALLV for details.
7.
  • Adolfson, Malin, et al. (authors)
  • Bayesian estimation of an open economy DSGE model with incomplete pass-through
  • 2007
  • In: Journal of International Economics. - : Elsevier BV. - 0022-1996 .- 1873-0353. ; 72:2, pp. 481-511
  • Journal article (peer-reviewed). Abstract:
    • In this paper, we develop a dynamic stochastic general equilibrium (DSGE) model for an open economy, and estimate it on Euro area data using Bayesian estimation techniques. The model incorporates several open economy features, as well as a number of nominal and real frictions that have proven to be important for the empirical fit of closed economy models. The paper offers: i) a theoretical development of the standard DSGE model into an open economy setting, ii) Bayesian estimation of the model, including assessments of the relative importance of various shocks and frictions for explaining the dynamic development of an open economy, and iii) an evaluation of the model's empirical properties using standard validation methods.
8.
  • Adolfson, Malin, et al. (authors)
  • Empirical properties of closed- and open-economy DSGE models of the Euro area
  • 2008
  • In: Macroeconomic Dynamics. - 1365-1005 .- 1469-8056. ; 12, pp. 2-19
  • Journal article (peer-reviewed). Abstract:
    • In this paper, we compare the empirical properties of closed- and open-economy DSGE models estimated on Euro area data. The comparison is made along several dimensions; we examine the models in terms of their marginal likelihoods, forecasting performance, variance decompositions, and their transmission mechanisms of monetary policy.
9.
  • Adolfson, Malin, et al. (authors)
  • Forecasting performance of an open economy DSGE model
  • 2007
  • In: Econometric Reviews. - : Informa UK Limited. - 0747-4938 .- 1532-4168. ; 26:2-4, pp. 289-328
  • Journal article (peer-reviewed). Abstract:
    • This paper analyzes the forecasting performance of an open economy dynamic stochastic general equilibrium (DSGE) model, estimated with Bayesian methods, for the Euro area during 1994Q1-2002Q4. We compare the DSGE model and a few variants of this model to various reduced form forecasting models such as vector autoregressions (VARs) and vector error correction models (VECM), estimated both by maximum likelihood and by two different Bayesian approaches, and traditional benchmark models, e.g., the random walk. The accuracy of point forecasts, interval forecasts and the predictive distribution as a whole are assessed in an out-of-sample rolling event evaluation using several univariate and multivariate measures. The results show that the open economy DSGE model compares well with more empirical models and thus that the tension between rigor and fit in older generations of DSGE models is no longer present. We also critically examine the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g., the log determinant statistic, as measures of overall forecasting performance.
10.
  • Andersson, Olov, 1979- (author)
  • Learning to Make Safe Real-Time Decisions Under Uncertainty for Autonomous Robots
  • 2020
  • Doctoral thesis (other academic/artistic). Abstract:
    • Robots are increasingly expected to go beyond controlled environments in laboratories and factories, to act autonomously in real-world workplaces and public spaces. Autonomous robots navigating the real world have to contend with a great deal of uncertainty, which poses additional challenges. Uncertainty in the real world accrues from several sources. Some of it may originate from imperfect internal models of reality. Other uncertainty is inherent, a direct side effect of partial observability induced by sensor limitations and occlusions. Regardless of the source, the resulting decision problem is unfortunately computationally intractable under uncertainty. This poses a great challenge as the real world is also dynamic. It will not pause while the robot computes a solution. Autonomous robots navigating among people, for example in traffic, need to be able to make split-second decisions. Uncertainty is therefore often neglected in practice, with potentially catastrophic consequences when something unexpected happens. The aim of this thesis is to leverage recent advances in machine learning to compute safe real-time approximations to decision-making under uncertainty for real-world robots. We explore a range of methods, from probabilistic to deep learning, as well as different combinations with optimization-based methods from robotics, planning and control. Driven by applications in robot navigation, and grounded in experiments with real autonomous quadcopters, we address several parts of this problem: from reducing uncertainty by learning better models, to directly approximating the decision problem itself, all while attempting to satisfy both the safety and real-time requirements of real-world autonomy.
11.
  • Andersson, Olov, 1979- (author)
  • Methods for Scalable and Safe Robot Learning
  • 2017
  • Licentiate thesis (other academic/artistic). Abstract:
    • Robots are increasingly expected to go beyond controlled environments in laboratories and factories, to enter real-world public spaces and homes. However, robot behavior is still usually engineered for narrowly defined scenarios. Manually encoding robot behavior that works within complex real-world environments, such as busy workplaces or cluttered homes, can be a daunting task. In addition, such robots may require a high degree of autonomy to be practical, which imposes stringent requirements on safety and robustness. The aim of this thesis is to examine methods for automatically learning safe robot behavior, lowering the costs of synthesizing behavior for complex real-world situations. To avoid task-specific assumptions, we approach this from a data-driven machine learning perspective. The strength of machine learning is its generality: given sufficient data, it can learn to approximate any task. However, being embodied agents in the real world, robots pose a number of difficulties for machine learning. These include real-time requirements with limited computational resources, the cost and effort of operating and collecting data with real robots, as well as safety issues for both the robot and human bystanders. While machine learning is general by nature, overcoming the difficulties with real-world robots outlined above remains a challenge. In this thesis we look for a middle ground on robot learning, leveraging the strengths of both data-driven machine learning and engineering techniques from robotics and control. This includes combining data-driven world models with fast techniques for planning motions under safety constraints, using machine learning to generalize such techniques to problems with high uncertainty, and using machine learning to find computationally efficient approximations for use on small embedded systems. We demonstrate such behavior synthesis techniques with real robots, solving a class of difficult dynamic collision avoidance problems under uncertainty, such as those induced by the presence of humans without prior coordination: initially using online planning offloaded to a desktop CPU, and ultimately as a deep neural network policy embedded on board a quadcopter.
12.
  • Andersson, Olov, 1979-, et al. (authors)
  • Real-Time Robotic Search using Structural Spatial Point Processes
  • 2020
  • In: 35th Uncertainty in Artificial Intelligence Conference (UAI 2019). - : Association for Uncertainty in Artificial Intelligence (AUAI). ; pp. 995-1005
  • Conference paper (peer-reviewed). Abstract:
    • Aerial robots hold great potential for aiding Search and Rescue (SAR) efforts over large areas, such as during natural disasters. Traditional approaches typically search an area exhaustively, thereby ignoring that the density of victims varies based on predictable factors, such as the terrain, population density and the type of disaster. We present a probabilistic model to automate SAR planning, with explicit minimization of the expected time to discovery. The proposed model is a spatial point process with three interacting spatial fields for i) the point patterns of persons in the area, ii) the probability of detecting persons and iii) the probability of injury. This structure allows inclusion of informative priors from e.g. geographic or cell phone traffic data, while falling back to latent Gaussian processes when priors are missing or inaccurate. To solve this problem in real-time, we propose a combination of fast approximate inference using Integrated Nested Laplace Approximation (INLA), and a novel Monte Carlo tree search tailored to the problem. Experiments using data simulated from real world Geographic Information System (GIS) maps show that the framework outperforms competing approaches, finding many more injured in the crucial first hours.
13.
  • Dahlin, Johan, 1986-, et al. (authors)
  • Approximate inference in state space models with intractable likelihoods using Gaussian process optimisation
  • 2014
  • Report (other academic/artistic). Abstract:
    • We propose a novel method for MAP parameter inference in nonlinear state space models with intractable likelihoods. The method is based on a combination of Gaussian process optimisation (GPO), sequential Monte Carlo (SMC) and approximate Bayesian computations (ABC). SMC and ABC are used to approximate the intractable likelihood by using the similarity between simulated realisations from the model and the data obtained from the system. The GPO algorithm is used for the MAP parameter estimation given noisy estimates of the log-likelihood. The proposed parameter inference method is evaluated in three problems using both synthetic and real-world data. The results are promising, indicating that the proposed algorithm converges fast and with reasonable accuracy compared with existing methods.
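The core ABC idea in the abstract above, approximating an intractable likelihood by the similarity between simulated and observed data, can be sketched in a few lines. This is not the authors' GPO-SMC-ABC algorithm: the Gaussian toy model, the sample-mean summary statistic, and the tolerance `eps` are all invented for illustration.

```python
import math
import random

def abc_loglik(theta, data, simulate, n_sims=200, eps=0.5, seed=1):
    """Crude ABC estimate of a log-likelihood: the log of the fraction of
    simulated datasets whose summary statistic (here: the sample mean)
    lands within `eps` of the observed one."""
    rng = random.Random(seed)
    obs = sum(data) / len(data)
    hits = 0
    for _ in range(n_sims):
        sim = simulate(theta, len(data), rng)
        if abs(sum(sim) / len(sim) - obs) < eps:
            hits += 1
    return math.log(max(hits, 1) / n_sims)   # floor at one hit to keep the log finite

# Toy "intractable" model: i.i.d. Gaussian with unknown mean, unit variance.
def simulate(theta, n, rng):
    return [rng.gauss(theta, 1.0) for _ in range(n)]

rng = random.Random(0)
data = [rng.gauss(2.0, 1.0) for _ in range(100)]

# The ABC log-likelihood surface should peak near the true mean of 2.0.
scores = {th: abc_loglik(th, data, simulate) for th in [0.0, 1.0, 2.0, 3.0]}
best = max(scores, key=scores.get)
```

In the paper this kind of noisy log-likelihood estimate is the input to Gaussian process optimisation; here the grid over `theta` stands in for that outer loop.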
14.
  • Dang, Khue-Dung, et al. (authors)
  • Hamiltonian Monte Carlo with Energy Conserving Subsampling
  • 2019
  • In: Journal of Machine Learning Research. - : MIT Press. - 1532-4435 .- 1533-7928. ; 20, pp. 1-31
  • Journal article (peer-reviewed). Abstract:
    • Hamiltonian Monte Carlo (HMC) samples efficiently from high-dimensional posterior distributions with proposed parameter draws obtained by iterating on a discretized version of the Hamiltonian dynamics. The iterations make HMC computationally costly, especially in problems with large data sets, since it is necessary to compute posterior densities and their derivatives with respect to the parameters. Naively computing the Hamiltonian dynamics on a subset of the data causes HMC to lose its key ability to generate distant parameter proposals with high acceptance probability. The key insight in our article is that efficient subsampling HMC for the parameters is possible if both the dynamics and the acceptance probability are computed from the same data subsample in each complete HMC iteration. We show that this is possible to do in a principled way in a HMC-within-Gibbs framework where the subsample is updated using a pseudo marginal MH step and the parameters are then updated using an HMC step, based on the current subsample. We show that our subsampling methods are fast and compare favorably to two popular sampling algorithms that use gradient estimates from data subsampling. We also explore the current limitations of subsampling HMC algorithms by varying the quality of the variance reducing control variates used in the estimators of the posterior density and its gradients.
15.
16.
  • Eklund, Anders, et al. (authors)
  • A Bayesian Heteroscedastic GLM with Application to fMRI Data with Motion Spikes
  • 2017
  • In: NeuroImage. - : Elsevier. - 1053-8119 .- 1095-9572. ; 155, pp. 354-369
  • Journal article (peer-reviewed). Abstract:
    • We propose a voxel-wise general linear model with autoregressive noise and heteroscedastic noise innovations (GLMH) for analyzing functional magnetic resonance imaging (fMRI) data. The model is analyzed from a Bayesian perspective and has the benefit of automatically down-weighting time points close to motion spikes in a data-driven manner. We develop a highly efficient Markov Chain Monte Carlo (MCMC) algorithm that allows for Bayesian variable selection among the regressors to model both the mean (i.e., the design matrix) and variance. This makes it possible to include a broad range of explanatory variables in both the mean and variance (e.g., time trends, activation stimuli, head motion parameters and their temporal derivatives), and to compute the posterior probability of inclusion from the MCMC output. Variable selection is also applied to the lags in the autoregressive noise process, making it possible to infer the lag order from the data simultaneously with all other model parameters. We use both simulated data and real fMRI data from OpenfMRI to illustrate the importance of proper modeling of heteroscedasticity in fMRI data analysis. Our results show that the GLMH tends to detect more brain activity, compared to its homoscedastic counterpart, by allowing the variance to change over time depending on the degree of head motion.
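The down-weighting intuition behind the GLMH, that time points with larger noise variance should influence the fit less, can be illustrated with plain weighted least squares. This is a toy sketch, not the paper's MCMC and variable-selection machinery; the one-regressor model, the spike pattern, and all numbers are invented.

```python
import random

def wls_line(x, y, w):
    """Weighted least-squares fit of y = a + b*x. Observations with larger
    noise variance get weight 1/variance and hence less influence."""
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
    return yb - b * xb, b

rng = random.Random(42)
n = 200
x = [i / n for i in range(n)]
spike = [i % 20 == 0 for i in range(n)]        # every 20th scan is a "motion spike"
sigma = [5.0 if s else 0.5 for s in spike]     # spikes have 10x the noise s.d.
y = [1.0 + 2.0 * xi + rng.gauss(0.0, s) for xi, s in zip(x, sigma)]

# Equal weights (homoscedastic fit) vs. variance-aware weights 1/sigma^2,
# which automatically down-weight the spike-contaminated time points.
_, b_equal = wls_line(x, y, [1.0] * n)
_, b_weighted = wls_line(x, y, [1.0 / s ** 2 for s in sigma])
```

The variance-aware fit recovers the true slope of 2.0 much more reliably; in the paper the per-time-point variances are not known but learned from the data.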
17.
  • Eklund, Anders, et al. (authors)
  • BROCCOLI : Software for fast fMRI analysis on many-core CPUs and GPUs
  • 2014
  • In: Frontiers in Neuroinformatics. - : Progressive Frontiers Press. - 1662-5196. ; 8:24
  • Journal article (peer-reviewed). Abstract:
    • Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm3 brain template in 4–6 s, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from github (https://github.com/wanderine/BROCCOLI/).
18.
  • Eklund, Anders, et al. (authors)
  • Harnessing graphics processing units for improved neuroimaging statistics
  • 2013
  • In: Cognitive, Affective, & Behavioral Neuroscience. - : Springer. - 1530-7026 .- 1531-135X. ; 13:3, pp. 587-597
  • Journal article (peer-reviewed). Abstract:
    • Simple models and algorithms based on restrictive assumptions are often used in the field of neuroimaging for studies involving functional magnetic resonance imaging, voxel based morphometry, and diffusion tensor imaging. Nonparametric statistical methods or flexible Bayesian models can be applied rather easily to yield more trustworthy results. The spatial normalization step required for multisubject studies can also be improved by taking advantage of more robust algorithms for image registration. A common drawback of algorithms based on weaker assumptions, however, is the increase in computational complexity. In this short overview, we will therefore present some examples of how inexpensive PC graphics hardware, normally used for demanding computer games, can be used to enable practical use of more realistic models and accurate algorithms, such that the outcome of neuroimaging studies really can be trusted.
19.
  • Giordani, Paolo, et al. (authors)
  • Forecasting macroeconomic time series with locally adaptive signal extraction
  • 2010
  • In: International Journal of Forecasting. - : Elsevier BV. - 0169-2070 .- 1872-8200. ; 26:2, pp. 312-325
  • Journal article (peer-reviewed). Abstract:
    • We introduce a non-Gaussian dynamic mixture model for macroeconomic forecasting. The locally adaptive signal extraction and regression (LASER) model is designed to capture relatively persistent AR processes (signal) which are contaminated by high frequency noise. The distributions of the innovations in both noise and signal are modeled robustly using mixtures of normals. The mean of the process and the variances of the signal and noise are allowed to shift either suddenly or gradually at unknown locations and unknown numbers of times. The model is then capable of capturing movements in the mean and conditional variance of a series, as well as in the signal-to-noise ratio. Four versions of the model are estimated by Bayesian methods and used to forecast a total of nine quarterly macroeconomic series from the US, Sweden and Australia. We observe that allowing for infrequent and large parameter shifts while imposing normal and homoskedastic errors often leads to erratic forecasts, but that the model typically forecasts well if it is made more robust by allowing for non-normal errors and time varying variances. Our main finding is that, for the nine series we analyze, specifications with infrequent and large shifts in error variances outperform both fixed parameter specifications and smooth, continuous shifts when it comes to interval coverage.
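A toy simulation of the kind of series LASER is designed for: a persistent AR(1) signal contaminated by noise whose variance shifts suddenly at an unknown location. All parameter values are invented, and this sketch only generates such a series (the model in the paper infers the shifts from data).

```python
import random
import statistics

rng = random.Random(7)

def simulate_series(n, shift_at, rho=0.9):
    """Persistent AR(1) signal plus white noise whose variance jumps once,
    suddenly, at an unknown (here: fixed) location."""
    y, s = [], 0.0
    for t in range(n):
        s = rho * s + rng.gauss(0.0, 0.2)           # slow-moving signal
        noise_sd = 0.3 if t < shift_at else 3.0     # infrequent, large variance shift
        y.append(s + rng.gauss(0.0, noise_sd))
    return y

y = simulate_series(400, shift_at=200)
sd_before = statistics.pstdev(y[:200])
sd_after = statistics.pstdev(y[200:])
```

The sample standard deviation jumps by several times at the shift, which is exactly the signal-to-noise change the model's shift mechanism is meant to capture.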
20.
  • Giordani, Paolo, et al. (authors)
  • Taking the Twists into Account: Predicting Firm Bankruptcy Risk with Splines of Financial Ratios
  • 2014
  • In: Journal of Financial and Quantitative Analysis. - : Cambridge University Press (CUP): HSS Journals. - 0022-1090 .- 1756-6916. ; 49:4, pp. 1071-1099
  • Journal article (peer-reviewed). Abstract:
    • We demonstrate improvements in predictive power when introducing spline functions to take account of highly nonlinear relationships between firm failure and leverage, earnings, and liquidity in a logistic bankruptcy model. Our results show that modeling excessive nonlinearities yields substantially improved bankruptcy predictions, on the order of 70%-90%, compared with a standard logistic model. The spline model provides several important and surprising insights into nonmonotonic bankruptcy relationships. We find that low-leveraged as well as highly profitable firms are riskier than suggested by a standard model, possibly a manifestation of credit rationing and excess cash-flow volatility.
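The spline-versus-linear point can be reproduced on invented data: a logistic model with a piecewise-linear (hinge) spline basis captures a U-shaped failure risk that a standard linear logit cannot. A minimal sketch, not the paper's specification; the knot placement, the risk function, and the fitting routine are all made up for illustration.

```python
import math
import random

def hinge_basis(x, knots):
    """Piecewise-linear spline basis: [1, x, (x - k)_+ for each knot]."""
    return [1.0, x] + [max(0.0, x - k) for k in knots]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(X, y, steps=1500, lr=2.0):
    """Logistic regression fitted by plain full-batch gradient descent."""
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi))) - yi
            for j, xj in enumerate(xi):
                grad[j] += err * xj
        w = [wj - lr * gj / n for wj, gj in zip(w, grad)]
    return w

def loglik(w, X, y):
    ll = 0.0
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
        ll += math.log(p if yi else 1.0 - p)
    return ll

# Toy data: failure risk is U-shaped in "leverage" -- both very low and very
# high leverage are risky -- which a linear-in-leverage logit cannot capture.
rng = random.Random(3)
knots = [0.2, 0.8]
lev = [rng.random() for _ in range(200)]
fail = [1 if rng.random() < (0.8 if (x < 0.2 or x > 0.8) else 0.1) else 0
        for x in lev]

w_lin = fit_logit([[1.0, x] for x in lev], fail)
w_spl = fit_logit([hinge_basis(x, knots) for x in lev], fail)
```

The spline fit attains a higher in-sample log-likelihood and recovers the U shape: predicted risk at the low-leverage extreme exceeds the mid-range risk, which no monotone linear logit can reproduce.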
21.
  • Gu, Xuan, et al. (authors)
  • Bayesian Diffusion Tensor Estimation with Spatial Priors
  • 2017
  • In: CAIP 2017. - Cham : Springer International Publishing. - 9783319646893 - 9783319646886 ; pp. 372-383
  • Conference paper (peer-reviewed). Abstract:
    • Spatial regularization is a technique that exploits the dependence between nearby regions to locally pool data, with the effect of reducing noise and implicitly smoothing the data. Most of the currently proposed methods are focused on minimizing a cost function, during which the regularization parameter must be tuned in order to find the optimal solution. We propose a fast Markov chain Monte Carlo (MCMC) method for diffusion tensor estimation, with both 2D and 3D spatial priors. The regularization parameter is estimated jointly with the tensor using MCMC. We compare FA (fractional anisotropy) maps for various b-values using three diffusion tensor estimation methods: least-squares and MCMC with and without spatial priors. Coefficient of variation (CV) is calculated to measure the uncertainty of the FA maps calculated from the MCMC samples, and our results show that the MCMC algorithm with spatial priors provides a denoising effect and reduces the uncertainty of the MCMC samples.
22.
  • Gustafsson, Oskar, 1990-, et al. (authors)
  • Bayesian optimization of hyperparameters from noisy marginal likelihood estimates
  • 2023
  • In: Journal of Applied Econometrics (Chichester, England). - : Wiley. - 0883-7252 .- 1099-1255. ; 38:4, pp. 577-595
  • Journal article (peer-reviewed). Abstract:
    • Bayesian models often involve a small set of hyperparameters determined by maximizing the marginal likelihood. Bayesian optimization is an iterative method where a Gaussian process posterior of the underlying function is sequentially updated by new function evaluations. We propose a novel Bayesian optimization framework for situations where the user controls the computational effort and therefore the precision of the function evaluations. This is common in econometrics, where the marginal likelihood is often computed by Markov chain Monte Carlo or importance sampling methods. The new acquisition strategy gives the optimizer the option to explore the function with cheap noisy evaluations and therefore find the optimum faster. The method is applied to estimating the prior hyperparameters in two popular models on US macroeconomic time series data: the steady-state Bayesian vector autoregression (BVAR) and the time-varying parameter BVAR with stochastic volatility.
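A minimal sketch of Bayesian optimization with noisy function evaluations: a Gaussian-process surrogate with a per-evaluation noise term and a simple upper-confidence-bound acquisition rule. This is not the paper's acquisition strategy; the "marginal likelihood" objective, the kernel length scale, and all constants are invented for illustration.

```python
import math
import random

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel with length scale ls."""
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

def solve(A, rhs):
    """Gaussian elimination with partial pivoting (fine for small systems)."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            for k in range(c, n + 1):
                M[i][k] -= f * M[c][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][k] * x[k] for k in range(i + 1, n))) / M[i][i]
    return x

def gp_posterior(xs, ys, noise_var, xq):
    """GP posterior mean/variance at xq, given per-evaluation noise variances."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise_var[i] if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    kq = [rbf(xi, xq) for xi in xs]
    mean = sum(a * k for a, k in zip(alpha, kq))
    beta = solve(K, kq)
    var = max(1e-9, 1.0 - sum(b * k for b, k in zip(beta, kq)))
    return mean, var

# Hypothetical noisy objective standing in for a marginal likelihood,
# with its optimum at 0.6; every evaluation is corrupted by noise.
def objective(x, noise_sd, rng):
    return math.exp(-8.0 * (x - 0.6) ** 2) + rng.gauss(0.0, noise_sd)

rng = random.Random(5)
xs, ys, noise_var = [0.1, 0.5, 0.9], [], []
for x0 in xs:
    ys.append(objective(x0, 0.05, rng))
    noise_var.append(0.05 ** 2)

# Upper-confidence-bound acquisition over a grid: evaluate where the
# posterior mean plus two posterior standard deviations is largest.
for _ in range(10):
    def ucb(g):
        m, v = gp_posterior(xs, ys, noise_var, g)
        return m + 2.0 * math.sqrt(v)
    xn = max([i / 50 for i in range(51)], key=ucb)
    xs.append(xn)
    ys.append(objective(xn, 0.05, rng))
    noise_var.append(0.05 ** 2)

best_x = max(xs, key=lambda x0: gp_posterior(xs, ys, noise_var, x0)[0])
```

The per-evaluation `noise_var` entries are what the paper's framework would vary: cheap evaluations get a large noise variance, expensive ones a small one.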
23.
24.
25.
  • Islam, Md Mafijul, et al. (authors)
  • Towards benchmarking of functional safety in the automotive industry
  • 2013
  • In: Lecture Notes in Computer Science. - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. - 9783642387883 ; pp. 111-125
  • Conference paper (peer-reviewed). Abstract:
    • Functional safety is becoming increasingly important in the automotive industry to deal with the growing reliance on the electrical and/or electronic (E/E) systems and the associated complexities. The introduction of ISO 26262, a new standard for functional safety in road vehicles, has made it even more important to adopt a systematic approach of evaluating functional safety. However, standard assessment methods for benchmarking functional safety of automotive systems are not available as of today. This is where the BeSafe (Benchmarking of Functional Safety) project comes into the picture. The BeSafe project aims to lay the foundation for benchmarking functional safety of automotive E/E systems. In this paper, we present a brief overview of the project along with the benchmark targets that we have identified as relevant for the automotive industry, assuming three abstraction layers (model, software, hardware). We then define and discuss a set of benchmark measures. Next, we propose a benchmark framework encompassing fault/error models, methods and the required tool support. This paper primarily focuses on functional safety benchmarking from the Safety Element out of Context (SEooC) viewpoint. Finally, we present some preliminary results and highlight potential future work.
26.
27.
  • Jonsson, Leif, et al. (authors)
  • Automatic Localization of Bugs to Faulty Components in Large Scale Software Systems using Bayesian Classification
  • 2016
  • In: 2016 IEEE International Conference on Software Quality, Reliability and Security (QRS 2016). - : IEEE. - 9781509041275 ; pp. 425-432
  • Conference paper (peer-reviewed). Abstract:
    • We suggest a Bayesian approach to the problem of reducing bug turnaround time in large software development organizations. Our approach is to use classification to predict where bugs are located in components. This classification is a form of automatic fault localization (AFL) at the component level. The approach only relies on historical bug reports and does not require detailed analysis of source code or detailed test runs. Our approach addresses two problems identified in user studies of AFL tools. The first problem concerns the trust which the user can put in the results of the tool. The second problem concerns understanding how the results were computed. The proposed model quantifies the uncertainty in its predictions and all estimated model parameters. Additionally, the output of the model explains why a result was suggested. We evaluate the approach on more than 50,000 bugs.
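The component-prediction step can be illustrated with a minimal multinomial naive Bayes classifier over bug-report words. The paper's model is more elaborate (it quantifies uncertainty and explains its predictions); this sketch, with invented components and reports, shows only the basic Bayesian classification idea of assigning a report to the component whose history makes it most probable.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTriage:
    """Minimal multinomial naive Bayes: assign a bug report to the
    component whose historical reports make its words most probable."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)
        self.doc_counts = Counter()
        self.vocab = set()

    def train(self, component, words):
        self.doc_counts[component] += 1
        self.word_counts[component].update(words)
        self.vocab.update(words)

    def predict(self, words):
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for c in self.doc_counts:
            n = sum(self.word_counts[c].values())
            score = math.log(self.doc_counts[c] / total_docs)   # component prior
            for w in words:   # Laplace-smoothed word likelihoods
                score += math.log((self.word_counts[c][w] + 1)
                                  / (n + len(self.vocab)))
            scores[c] = score
        return max(scores, key=scores.get)   # MAP component

nb = NaiveBayesTriage()
nb.train("network", "timeout socket retry connection".split())
nb.train("network", "dns lookup timeout failure".split())
nb.train("ui", "button misaligned render layout".split())
nb.train("ui", "font render glitch layout".split())

pred = nb.predict("connection timeout after retry".split())
```

The per-word log-probability contributions in `predict` are also the natural starting point for the kind of explanation the paper argues AFL tools should provide.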
28.
  • Josefsson, Maria, 1979- (author)
  • Attrition in Studies of Cognitive Aging
  • 2013
  • Doctoral thesis (other academic/artistic). Abstract:
    • Longitudinal studies of cognition are preferred to cross-sectional studies, since they offer a direct assessment of age-related cognitive change (within-person change). Statistical methods for analyzing age-related change are widely available. There are, however, a number of challenges accompanying such analyses, including cohort differences, ceiling and floor effects, and attrition. These difficulties challenge the analyst and put stringent requirements on the statistical method being used. The objective of Paper I is to develop a classifying method to study discrepancies in age-related cognitive change. The method needs to take into account the complex issues accompanying studies of cognitive aging, and specifically work out issues related to attrition. In a second step, we aim to identify predictors explaining stability or decline in cognitive performance in relation to demographic, life-style, health-related, and genetic factors. In the second paper, which is a continuation of Paper I, we investigate brain characteristics, structural and functional, that differ between successful aging elderly and elderly with an average cognitive performance over 15-20 years. In Paper III we develop a Bayesian model to estimate the causal effect of living arrangement (living alone versus living with someone) on cognitive decline. The model must balance confounding variables between the two living arrangement groups as well as account for non-ignorable attrition. This is achieved by combining propensity score matching with a pattern mixture model for longitudinal data. In Paper IV, the objective is to adapt and implement available imputation methods for longitudinal fMRI data, where some subjects are lost to follow-up. We apply these missing data methods to a real dataset, and evaluate these methods in a simulation study.
  •  
29.
  •  
30.
  • Li, Feng, 1984- (author)
  • Bayesian Modeling of Conditional Densities
  • 2013
  • Doctoral thesis (other academic/artistic) abstract
    • This thesis develops models and associated Bayesian inference methods for flexible univariate and multivariate conditional density estimation. The models are flexible in the sense that they can capture widely differing shapes of the data. The estimation methods are specifically designed to achieve flexibility while still avoiding overfitting. The models are flexible both for a given covariate value and across covariate space. A key contribution of this thesis is that it provides general approaches to density estimation with highly efficient Markov chain Monte Carlo methods. The methods are illustrated on several challenging non-linear and non-normal datasets. In the first paper, a general model is proposed for flexibly estimating the density of a continuous response variable conditional on a possibly high-dimensional set of covariates. The model is a finite mixture of asymmetric student-t densities with covariate-dependent mixture weights. The four parameters of the components, the mean, degrees of freedom, scale and skewness, are all modeled as functions of the covariates. The second paper explores how well a smooth mixture of symmetric components can capture skewed data. Simulations and applications on real data show that including covariate-dependent skewness in the components can lead to substantially improved performance on skewed data, often using a much smaller number of components. We also introduce smooth mixtures of gamma and log-normal components to model positively-valued response variables. In the third paper we propose a multivariate Gaussian surface regression model that combines both additive splines and interactive splines, and a highly efficient MCMC algorithm that updates all the multi-dimensional knot locations jointly. We use shrinkage priors to avoid overfitting, with different estimated shrinkage factors for the additive and surface parts of the model, and also different shrinkage parameters for the different response variables. In the last paper we present a general Bayesian approach for directly modeling dependencies between variables as a function of explanatory variables in a flexible copula context. In particular, the Joe-Clayton copula is extended to have covariate-dependent tail dependence and correlations. Posterior inference is carried out using a novel and efficient simulation method. The appendix of the thesis documents the computational implementation details.
  •  
31.
  • Li, Feng, 1984-, et al. (author)
  • Efficient Bayesian Multivariate Surface Regression
  • 2013
  • In: Scandinavian Journal of Statistics. - : Wiley-Blackwell. - 0303-6898 .- 1467-9469. ; 40:4, pp. 706-723
  • Journal article (peer-reviewed) abstract
    • Methods for choosing a fixed set of knot locations in additive spline models are fairly well established in the statistical literature. The curse of dimensionality makes it nontrivial to extend these methods to nonadditive surface models, especially when there are more than a couple of covariates. We propose a multivariate Gaussian surface regression model that combines both additive splines and interactive splines, and a highly efficient Markov chain Monte Carlo algorithm that updates all the knot locations jointly. We use shrinkage priors to avoid overfitting, with different estimated shrinkage factors for the additive and surface parts of the model, and also different shrinkage parameters for the different response variables. Simulated data and an application to firm leverage data show that the approach is computationally efficient, and that allowing for freely estimated knot locations can offer a substantial improvement in out-of-sample predictive performance.
  •  
32.
  • Li, Feng, 1984-, et al. (author)
  • Flexible Modeling of Conditional Distributions using Smooth Mixtures of Asymmetric Student T Densities
  • 2010
  • In: Journal of Statistical Planning and Inference. - : Elsevier BV. - 0378-3758 .- 1873-1171. ; 140:12, pp. 3638-3654
  • Journal article (peer-reviewed) abstract
    • A general model is proposed for flexibly estimating the density of a continuous response variable conditional on a possibly high-dimensional set of covariates. The model is a finite mixture of asymmetric student-t densities with covariate-dependent mixture weights. The four parameters of the components, the mean, degrees of freedom, scale and skewness, are all modeled as functions of the covariates. Inference is Bayesian and the computation is carried out using Markov chain Monte Carlo simulation. To enable model parsimony, a variable selection prior is used in each set of covariates and among the covariates in the mixing weights. The model is used to analyze the distribution of daily stock market returns, and shown to more accurately forecast the distribution of returns than other widely used models for financial data.
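The structure described in this abstract, a mixture of component densities whose weights depend on covariates through a softmax, can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: it uses symmetric student-t components with fixed parameters (the paper uses asymmetric student-t densities whose mean, degrees of freedom, scale and skewness all depend on the covariates), and every function and parameter name here is hypothetical.

```python
import math

def t_pdf(x, mu, sigma, nu):
    """Density of a symmetric student-t with location mu, scale sigma, df nu."""
    z = (x - mu) / sigma
    lognorm = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
               - 0.5 * math.log(nu * math.pi) - math.log(sigma))
    return math.exp(lognorm - (nu + 1) / 2 * math.log1p(z * z / nu))

def softmax(v):
    """Numerically stable softmax, mapping real scores to mixture weights."""
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def smooth_mixture_density(y, x, gating, components):
    """p(y | x): a mixture of t densities whose weights depend on covariate x.

    gating: one (intercept, slope) pair per component, fed to a softmax.
    components: (mu, sigma, nu) triples, held fixed here for brevity; the
    paper lets all component parameters, including skewness, vary with x.
    """
    weights = softmax([a + b * x for a, b in gating])
    return sum(w * t_pdf(y, mu, s, nu)
               for w, (mu, s, nu) in zip(weights, components))
```

Because the gating weights change smoothly with x, the conditional density can shift shape across covariate space even with only a handful of components.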
  •  
33.
  • Li, Feng, 1984-, et al. (author)
  • Modeling Conditional Densities using Finite Smooth Mixtures
  • 2011
  • In: Mixtures. - Chichester : John Wiley & Sons. - 9781119993896 ; , pp. 123-144
  • Book chapter (peer-reviewed) abstract
    • Smooth mixtures, i.e. mixture models with covariate-dependent mixing weights, are very useful flexible models for conditional densities. Previous work shows that using too simple mixture components for modeling heteroscedastic and/or heavy tailed data can give a poor fit, even with a large number of components. This paper explores how well a smooth mixture of symmetric components can capture skewed data. Simulations and applications on real data show that including covariate-dependent skewness in the components can lead to substantially improved performance on skewed data, often using a much smaller number of components. Furthermore, variable selection is effective in removing unnecessary covariates in the skewness, which means that there is little loss in allowing for skewness in the components when the data are actually symmetric. We also introduce smooth mixtures of gamma and log-normal components to model positively-valued response variables.
  •  
34.
  • Maghazeh, Arian, et al. (author)
  • Perception-aware power management for mobile games via dynamic resolution scaling
  • 2015
  • In: 2015 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). - : IEEE. - 9781467383882 ; , pp. 613-620
  • Conference paper (peer-reviewed) abstract
    • Modern mobile devices provide ultra-high resolutions in their display panels. This imposes ever increasing workload on the GPU, leading to high power consumption and shortened battery life. In this paper, we first show that resolution scaling leads to significant power savings. Second, we propose a perception-aware adaptive scheme that sets the resolution during game play. We exploit the fact that game players are often willing to trade quality for longer battery life. Our scheme uses decision theory, where the predicted user perception is combined with a novel asymmetric loss function that encodes changes in users' willingness to save power.
  •  
35.
  • Magnusson, Måns, 1981-, et al. (author)
  • DOLDA : a regularized supervised topic model for high-dimensional multi-class regression
  • 2020
  • In: Computational Statistics (Zeitschrift). - : Springer Science and Business Media LLC. - 0943-4062 .- 1613-9658. ; 35:1, pp. 175-201
  • Journal article (peer-reviewed) abstract
    • Generating user interpretable multi-class predictions in data-rich environments with many classes and explanatory covariates is a daunting task. We introduce Diagonal Orthant Latent Dirichlet Allocation (DOLDA), a supervised topic model for multi-class classification that can handle many classes as well as many covariates. To handle many classes we use the recently proposed Diagonal Orthant probit model (Johndrow et al., in: Proceedings of the sixteenth international conference on artificial intelligence and statistics, 2013) together with an efficient Horseshoe prior for variable selection/shrinkage (Carvalho et al. in Biometrika 97:465-480, 2010). We propose a computationally efficient parallel Gibbs sampler for the new model. An important advantage of DOLDA is that learned topics are directly connected to individual classes without the need for a reference class. We evaluate the model's predictive accuracy and scalability, and demonstrate DOLDA's advantage in interpreting the generated predictions.
  •  
36.
  • Magnusson, Måns, 1981- (author)
  • Scalable and Efficient Probabilistic Topic Model Inference for Textual Data
  • 2018
  • Doctoral thesis (other academic/artistic) abstract
    • Probabilistic topic models have proven to be an extremely versatile class of mixed-membership models for discovering the thematic structure of text collections. There are many possible applications, covering a broad range of areas of study: technology, natural science, social science and the humanities. In this thesis, a new efficient parallel Markov chain Monte Carlo inference algorithm is proposed for Bayesian inference in large topic models. The proposed methods scale well with the corpus size and can be used for other probabilistic topic models and other natural language processing applications. The proposed methods are fast, efficient, scalable, and will converge to the true posterior distribution. In addition, a supervised topic model for high-dimensional text classification is proposed, with emphasis on interpretable document prediction using the horseshoe shrinkage prior in supervised topic models. Finally, we develop a model and inference algorithm that can model agenda and framing of political speeches over time with a priori defined topics. We apply the approach to analyze the evolution of immigration discourse in the Swedish parliament by combining theory from political science and communication science with a probabilistic topic model.
  •  
37.
  • Magnusson, Måns, 1981-, et al. (author)
  • Sparse Partially Collapsed MCMC for Parallel Inference in Topic Models
  • 2018
  • In: Journal of Computational and Graphical Statistics. - : American Statistical Association. - 1061-8600 .- 1537-2715. ; 27:2, pp. 449-463
  • Journal article (peer-reviewed) abstract
    • Topic models, and more specifically the class of latent Dirichlet allocation (LDA), are widely used for probabilistic modeling of text. Markov chain Monte Carlo (MCMC) sampling from the posterior distribution is typically performed using a collapsed Gibbs sampler. We propose a parallel sparse partially collapsed Gibbs sampler and compare its speed and efficiency to state-of-the-art samplers for topic models on five well-known text corpora of differing sizes and properties. In particular, we propose and compare two different strategies for sampling the parameter block with latent topic indicators. The experiments show that the increase in statistical inefficiency from only partial collapsing is smaller than commonly assumed, and can be more than compensated by the speedup from parallelization and sparsity on larger corpora. We also prove that the partially collapsed samplers scale well with the size of the corpus. The proposed algorithm is fast, efficient, exact, and can be used in more modeling situations than the ordinary collapsed sampler. Supplementary materials for this article are available online.
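For orientation, the baseline that this paper improves on, the standard fully collapsed Gibbs sampler for LDA, can be sketched compactly. This is a generic textbook sketch of the serial, fully collapsed sampler only, not the authors' parallel sparse partially collapsed algorithm; all names and default values are hypothetical.

```python
import random

def collapsed_gibbs_lda(docs, n_topics, n_vocab, iters=50, alpha=0.1, beta=0.01, seed=0):
    """Fully collapsed Gibbs sampler for LDA.

    docs: list of documents, each a list of integer word ids in [0, n_vocab).
    Returns the final topic assignments z and the topic-word count matrix.
    """
    rng = random.Random(seed)
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]
    ndk = [[0] * n_topics for _ in docs]            # document-topic counts
    nkw = [[0] * n_vocab for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                             # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                # Remove the token's current assignment from all counts...
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # ...sample a new topic from the collapsed conditional...
                probs = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + n_vocab * beta)
                         for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=probs)[0]
                # ...and add it back.
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return z, nkw
```

The sequential dependence between tokens in this sampler is exactly what makes parallelization hard, which motivates the partially collapsed variant studied in the paper.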
  •  
38.
  • Mahfouzi, Rouhollah, et al. (author)
  • Intrusion-Damage Assessment and Mitigation in Cyber-Physical Systems for Control Applications
  • 2016
  • In: RTNS '16 Proceedings of the 24th International Conference on Real-Time Networks and Systems. - New York : ACM Press. - 9781450347877 ; , pp. 141-150
  • Conference paper (peer-reviewed) abstract
    • With cyber-physical systems opening to the outside world, security can no longer be considered a secondary issue. One of the key aspects of security in cyber-physical systems is dealing with intrusions. In this paper, we highlight several unique properties of control applications in cyber-physical systems. Using these unique properties, we propose a systematic intrusion-damage assessment and mitigation mechanism for the class of observable and controllable attacks. On the one hand, in cyber-physical systems, the plants follow certain laws of physics, and this can be utilized to address the intrusion-damage assessment problem. That is, the states of the controlled plant should follow those expected according to the physics of the system, and any major discrepancy is potentially an indication of intrusion. Here, we use a machine learning algorithm to capture the normal behavior of the system according to its dynamics. On the other hand, the control performance strongly depends on the amount of allocated resources, and this can be used to address the intrusion-damage mitigation problem. That is, the intrusion-damage mitigation is based on the idea of allocating more resources to the control application under attack. This is done using a feedback-based approach involving convex optimization.
  •  
39.
  • Mohammadinodooshan, Alireza, 1983- (author)
  • Data-driven Contributions to Understanding User Engagement Dynamics on Social Media
  • 2024
  • Doctoral thesis (other academic/artistic) abstract
    • Social media platforms have fundamentally transformed the way information is produced, distributed, and consumed. News digestion and dissemination are not an exception. A recent study by the Pew Research Center highlights that 53% of Twitter (renamed X) users, alongside notable percentages on Facebook (43%), Reddit (38%), and Instagram (34%), rely on these platforms for their daily news. Unfortunately, not all news is reliable and unbiased, which poses a significant societal challenge. Beyond news, content posted by influencers can also play an important role in shaping opinions and behaviors. Indeed, how users engage with different classes of content (including unreliable content) on social media can amplify its visibility and shape public perceptions and debates. Recognizing this, prior research has studied different aspects of user engagement dynamics with varying classes of content. However, several unexplored dimensions remain. To better understand these dynamics, this thesis addresses part of this research gap through eight comprehensive studies across four key dimensions, with particular focus on news content. The first dimension of this thesis presents a large-scale analysis of users' interactions with news publishers on Twitter. This analysis provides a fine-grained understanding of engagement patterns with various classes of publishers, with key findings indicating elevated engagement rates among unreliable news publishers. The second dimension examines the dynamics of interaction patterns between public and private (less public) sharing of news articles on Facebook. This dimension highlights deeper user engagement in private contexts compared to the public sphere, with both spheres showing the highest interaction levels with highly unreliable content. The third dimension investigates the drivers of popularity among news tweets to understand what makes some tweets more or less successful in gaining user engagement. For instance, this analysis reveals the negative impact of analytic language on user engagement, with the biggest engagement declines observed among unreliable publishers. Finally, the thesis emphasizes the importance of temporal dynamics in user engagement. For example, exploring temporal user engagement with different news classes over time, we observe a positive correlation between the reliability of a post and the early interactions it receives on Facebook. While the thesis quantitatively assesses the effects of reliability across all dimensions, it also places additional focus on the role of bias in the observed patterns. These and other findings presented in the thesis offer actionable insights that can benefit multiple stakeholders: policymakers and content moderators gain a comprehensive perspective for addressing the spread of problematic content; platform designers can build features that promote healthy online communities; news outlets can tailor content strategies to their target audiences; and individual users can make more informed decisions. Although the thesis has inherent limitations, it deepens our current understanding of engagement dynamics, fostering a more secure and trustworthy social media experience that remains engaging.
  •  
40.
  • Munezero, Parfait, 1986- (author)
  • Bayesian Sequential Inference for Dynamic Regression Models
  • 2020
  • Doctoral thesis (other academic/artistic) abstract
    • Many processes evolve over time and statistical models need to be adaptive to change. This thesis proposes flexible models and statistical methods for inference about a data generating process that varies over time. The models considered are quite general dynamic predictive models with parameters linked to a set of covariates via link functions. The dynamics can arise from time-varying regression coefficients and from changes in the link function over time. The covariates can be time-varying and may also have incomplete information. An efficient Bayesian inference methodology is developed for analyzing the posterior of dynamic regression models sequentially, with a particular focus on online learning and real-time prediction. The core inferential algorithm belongs to a family of sequential Monte Carlo methods commonly known as particle filters, and a key contribution is the development of a tailored proposal distribution. The algorithm is shown to outperform a state-of-the-art Markov chain Monte Carlo method and is also extended to mixture-of-experts models. The performance of the inference methodology is assessed through various simulation experiments and real data from clinical and social-demographic studies, as well as from an industrial software development project.
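The core inferential idea, a particle filter tracking a time-varying regression coefficient, can be illustrated with the simplest member of that family, a bootstrap filter. This is a minimal sketch under strong simplifying assumptions (a scalar coefficient, Gaussian noise, and a blind bootstrap proposal rather than the tailored proposal developed in the thesis); all names and default values are hypothetical.

```python
import math
import random

def bootstrap_filter(ys, xs, n_particles=500, state_sd=0.1, obs_sd=1.0, seed=0):
    """Bootstrap particle filter for the dynamic regression model
        y_t = beta_t * x_t + N(0, obs_sd^2),
        beta_t = beta_{t-1} + N(0, state_sd^2).
    Returns the filtered posterior mean of beta_t at each time step.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]  # prior draws
    filtered_means = []
    for y, x in zip(ys, xs):
        # Propagate each particle through the random-walk state equation.
        particles = [b + rng.gauss(0.0, state_sd) for b in particles]
        # Weight by the Gaussian observation likelihood (up to a constant).
        logw = [-0.5 * ((y - b * x) / obs_sd) ** 2 for b in particles]
        m = max(logw)
        w = [math.exp(l - m) for l in logw]
        total = sum(w)
        w = [v / total for v in w]
        filtered_means.append(sum(b * v for b, v in zip(particles, w)))
        # Multinomial resampling to combat weight degeneracy.
        particles = rng.choices(particles, weights=w, k=n_particles)
    return filtered_means
```

A tailored proposal, as developed in the thesis, would draw the propagated particles from a distribution that also looks at the incoming observation, which typically needs far fewer particles for the same accuracy.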
  •  
41.
  •  
42.
  • Munezero, Parfait, 1986-, et al. (author)
  • Dynamic Mixture of Experts Models for Online Prediction
  • 2023
  • In: Technometrics. - : Informa UK Limited. - 0040-1706 .- 1537-2723. ; 65:2, pp. 257-268
  • Journal article (peer-reviewed) abstract
    • A mixture of experts models the conditional density of a response variable using a mixture of regression models with covariate-dependent mixture weights. We extend the finite mixture of experts model by allowing the parameters in both the mixture components and the weights to evolve in time by following random walk processes. Inference for time-varying parameters in richly parameterized mixture of experts models is challenging. We propose a sequential Monte Carlo algorithm for online inference based on a tailored proposal distribution built on ideas from linear Bayes methods and the EM algorithm. The method gives a unified treatment for mixtures with time-varying parameters, including the special case of static parameters. We assess the properties of the method on simulated data and on industrial data where the aim is to predict software faults in a continuously upgraded large-scale software project. 
  •  
43.
  •  
44.
  • Nalenz, Malte, et al. (author)
  • Tree Ensembles with Rule Structured Horseshoe Regularization
  • 2018
  • In: Annals of Applied Statistics. - : Institute of Mathematical Statistics. - 1932-6157 .- 1941-7330. ; 12:4, pp. 2379-2408
  • Journal article (peer-reviewed) abstract
    • We propose a new Bayesian model for flexible nonlinear regression and classification using tree ensembles. The model is based on the RuleFit approach in Friedman and Popescu [Ann. Appl. Stat. 2 (2008) 916-954], where rules from decision trees and linear terms are used in an L1-regularized regression. We modify RuleFit by replacing the L1-regularization with a horseshoe prior, which is well known to give aggressive shrinkage of noise predictors while leaving the important signal essentially untouched. This is especially important when a large number of rules are used as predictors, as many of them only contribute noise. Our horseshoe prior has an additional hierarchical layer that applies more shrinkage a priori to rules with a large number of splits, and to rules that are only satisfied by a few observations. The aggressive noise shrinkage of our prior also makes it possible to complement the rules from boosting in RuleFit with an additional set of trees from Random Forest, which brings a desirable diversity to the ensemble. We sample from the posterior distribution using a very efficient and easily implemented Gibbs sampler. The new model is shown to outperform state-of-the-art methods like RuleFit, BART and Random Forest on 16 datasets. The model and its interpretation are demonstrated on the well-known Boston housing data, and on gene expression data for cancer classification. The posterior sampling, prediction and graphical tools for interpreting the model results are implemented in a publicly available R package.
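The first step of the RuleFit-style pipeline described above, turning root-to-node decision-tree paths into binary rule predictors for a regularized linear model, can be sketched as follows. This illustrates only the rule-feature construction (the horseshoe regression fitted on top is not shown), and the data structures and names are hypothetical.

```python
def rule_indicator(rule, x):
    """1.0 if observation x satisfies every condition in the rule, else 0.0.

    A rule is a list of (feature_index, threshold, is_greater) conditions,
    i.e. the conjunction of splits along one root-to-node path in a tree.
    """
    for j, threshold, is_greater in rule:
        if is_greater:
            if not x[j] > threshold:
                return 0.0
        else:
            if not x[j] <= threshold:
                return 0.0
    return 1.0

def rule_design_matrix(rules, X):
    """Binary design matrix whose columns are tree-derived rules.

    RuleFit-style methods then fit a regularized linear model (L1 in the
    original RuleFit, a horseshoe prior in the paper above) on these columns,
    possibly alongside the raw linear terms.
    """
    return [[rule_indicator(rule, x) for rule in rules] for x in X]
```

Because each column is a simple conjunction of splits, the fitted coefficients remain directly interpretable, which is the property the rule-structured prior is designed to preserve while shrinking noise rules away.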
  •  
45.
  • Nott, David J., et al. (author)
  • Regression density estimation with variational methods and stochastic approximation
  • 2012
  • In: Journal of Computational and Graphical Statistics. - : Taylor & Francis. - 1061-8600 .- 1537-2715. ; 21:3, pp. 797-820
  • Journal article (peer-reviewed) abstract
    • Regression density estimation is the problem of flexibly estimating a response distribution as a function of covariates. An important approach to regression density estimation uses finite mixture models and our article considers flexible mixtures of heteroscedastic regression (MHR) models where the response distribution is a normal mixture, with the component means, variances and mixture weights all varying as a function of covariates. Our article develops fast variational approximation methods for inference. Our motivation is that alternative computationally intensive MCMC methods for fitting mixture models are difficult to apply when it is desired to fit models repeatedly in exploratory analysis and model choice. Our article makes three contributions. First, a variational approximation for MHR models is described where the variational lower bound is in closed form. Second, the basic approximation can be improved by using stochastic approximation methods to perturb the initial solution to attain higher accuracy. Third, the advantages of our approach for model choice and evaluation compared to MCMC based approaches are illustrated. These advantages are particularly compelling for time series data where repeated refitting for one step ahead prediction in model choice and diagnostics and in rolling window computations is very common. Supplemental materials for the article are available online.
  •  
46.
  •  
47.
  • Oelrich, Oscar, 1986- (author)
  • Learning local predictive accuracy for expert evaluation and forecast combination
  • 2022
  • Doctoral thesis (other academic/artistic) abstract
    • This thesis consists of four papers that study several topics related to expert evaluation and aggregation. Paper I explores the properties of Bayes factors. Bayes factors, which are used for Bayesian hypothesis testing as well as to aggregate models using Bayesian model averaging, are sometimes observed to behave erratically. We analyze some of the sources of this erratic behavior, which we call overconfidence, by deriving the sampling distribution of Bayes factors for a class of linear models. We show that overconfidence is most likely to occur when comparing models that are complex and approximate the data-generating process in widely different ways. Paper II proposes a general framework for creating linear aggregate density forecasts based on local predictive ability, where we define local predictive ability as the conditional expected log predictive density given an arbitrary set of pooling variables. We call the space spanned by the variables in this set the pooling space and propose the caliper method as a way to estimate local predictive ability. We further introduce a local version of linear optimal pools that works by optimizing the historic performance of a linear pool only for past observations that were made at points in the pooling space close to the new point at which we want to make a prediction. Both methods are illustrated in two applications: macroeconomic forecasting and predictions of bike sharing usage in Washington D.C. Paper III builds on Paper II by introducing a Gaussian process (GP) as a model for estimating local predictive ability. When the predictive distribution of an expert, as well as the data-generating process, is normal, it follows that the distribution of the log scores will follow a scaled and translated noncentral chi-squared distribution with one degree of freedom. We show that, following a power transform of the log scores, they can be modeled using a Gaussian process with Gaussian noise. The proposed model has the advantage that the latent Gaussian process surface can be marginalized out in order to quickly obtain the marginal posteriors of the hyperparameters of the GP, which is important since the computational cost of the unmarginalized model is often prohibitive. The paper demonstrates the GP approach to modeling local predictive ability with a simulation study and an application using the bike sharing data from Paper II, and develops new methods for pooling predictive distributions conditional on full posterior distributions of local predictive ability. Paper IV further expands on Paper III by considering the problem of estimating local predictive ability for a set of experts jointly using a multi-output Gaussian process. In Paper III, the posterior distribution of the local predictive ability of each expert is obtained separately. By instead estimating a joint posterior, we can exploit dependencies in the correlation between the predictive abilities of the experts to create better aggregate predictions. We can also use this joint posterior for inference, for example to learn about the relationships between the different experts. The method is illustrated using a simulation study and the same bike sharing data as in Paper III.
  •  
48.
  • Oelrich, Oscar, 1986-, et al. (author)
  • Local prediction pools
  • Other publication (other academic/artistic)
  •  
49.
  • Oelrich, Oscar, et al. (author)
  • Local prediction pools
  • 2024
  • In: Journal of Forecasting. - 0277-6693 .- 1099-131X. ; , pp. 103-117
  • Journal article (peer-reviewed) abstract
    • We propose local prediction pools as a method for combining the predictive distributions of a set of experts conditional on a set of variables believed to be related to the predictive accuracy of the experts. This is done in a two-step process where we first estimate the conditional predictive accuracy of each expert given a vector of covariates (or pooling variables) and then combine the predictive distributions of the experts conditional on this local predictive accuracy. To estimate the local predictive accuracy of each expert, we introduce the simple, fast, and interpretable caliper method. Expert pooling weights from the local prediction pool approach the equal weight solution whenever there is little data on local predictive performance, making the pools robust and adaptive. We also propose a local version of the widely used optimal prediction pools. Local prediction pools are shown to outperform the widely used optimal linear pools in a macroeconomic forecasting evaluation and in predicting daily bike usage for a bike rental company.
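The caliper idea, estimating local predictive ability from past observations whose pooling variable lies close to the new point, and falling back to equal weights when none are close, might be sketched as follows. This is a speculative sketch: the softmax mapping from mean log scores to pool weights is an assumption of this example, not necessarily the paper's weighting rule, and all names are hypothetical.

```python
import math

def caliper_weights(history, z_new, caliper):
    """Pooling weights for a set of experts near a new pooling-variable value.

    history: list of (z, log_scores) pairs, where z is the pooling variable
    of a past observation and log_scores holds each expert's log predictive
    score for it. Only past points with |z - z_new| <= caliper are used;
    with no such points we fall back to equal weights (the robustness
    property noted in the abstract).
    """
    close = [scores for z, scores in history if abs(z - z_new) <= caliper]
    n_experts = len(history[0][1])
    if not close:
        return [1.0 / n_experts] * n_experts
    # Mean log predictive score per expert over the nearby observations.
    means = [sum(s[i] for s in close) / len(close) for i in range(n_experts)]
    # Softmax to get positive weights summing to one (an assumption here).
    m = max(means)
    exps = [math.exp(v - m) for v in means]
    total = sum(exps)
    return [v / total for v in exps]
```

Widening the caliper trades locality for stability: a very wide caliper recovers a global pool, while a very narrow one reverts to equal weights wherever history is sparse.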
  •  
50.
  •  