SwePub
Search the SwePub database


Hit list for the search "WFRF:(Nordgaard Anders 1962)"

Search: WFRF:(Nordgaard Anders 1962)

  • Results 1-43 of 43
1.
  •  
2.
  • Lindgren, Petter, et al. (author)
  • A likelihood ratio-based approach for improved source attribution in microbiological forensic investigations
  • 2019
  • In: Forensic Science International. - : Elsevier. - 0379-0738 .- 1872-6283. ; 302
  • Journal article (peer-reviewed), abstract:
    • A common objective in microbial forensic investigations is to identify the origin of a recovered pathogenic bacterium by DNA sequencing. However, there is currently no consensus about how degrees of belief in such origin hypotheses should be quantified, interpreted, and communicated to wider audiences. To fill this gap, we have developed a concept based on calculating probabilistic evidential values for microbial forensic hypotheses. The likelihood-ratio method underpinning this concept is widely used in other forensic fields, such as human DNA matching, where results are readily interpretable and have been successfully communicated in juridical hearings. The concept was applied to two case scenarios of interest in microbial forensics: (1) identifying source cultures among series of very similar cultures generated by parallel serial passage of the Tier 1 pathogen Francisella tularensis, and (2) finding the production facilities of strains isolated in a real disease outbreak caused by the human pathogen Listeria monocytogenes. Evidence values for the studied hypotheses were computed based on signatures derived from whole genome sequencing data, including deep-sequenced low-frequency variants and structural variants such as duplications and deletions acquired during serial passages. In the F. tularensis case study, we were able to correctly assign fictive evidence samples to the correct culture batches of origin on the basis of structural variant data. By setting up relevant hypotheses and using data on cultivated batch sources to define the reference populations under each hypothesis, evidential values could be calculated. The results show that extremely similar strains can be separated on the basis of amplified mutational patterns identified by high-throughput sequencing. In the L. monocytogenes scenario, analyses of whole genome sequence data conclusively assigned the clinical samples to specific sources of origin, and conclusions were formulated to facilitate communication of the findings. Taken together, these findings demonstrate the potential of using bacterial whole genome sequencing data, including data on both low frequency SNP signatures and structural variants, to calculate evidence values that facilitate interpretation and communication of the results. The concept could be applied in diverse scenarios, including both epidemiological and forensic source tracking of bacterial infectious disease outbreaks. 
  •  
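The evidential value referred to in the entry above is a likelihood ratio. As a hedged restatement in standard notation (mine, not necessarily the authors'):

    V = \frac{\Pr(E \mid H_1, I)}{\Pr(E \mid H_2, I)}

where E is the sequencing evidence, H_1 and H_2 are the competing source propositions (for example, that the trace originates from the suspected culture batch versus from some other batch in the reference population), and I is the background information; values above 1 support H_1 and values below 1 support H_2.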
3.
  • Nordgaard, Anders, 1962-, et al. (author)
  • A resampling technique for estimating the power of non-parametric trend tests
  • 2006
  • In: Environmetrics. - : Wiley. - 1180-4009 .- 1099-095X. ; 17:3, pp. 257-267
  • Journal article (peer-reviewed), abstract:
    • The power of Mann-Kendall tests and other non-parametric trend tests is normally estimated by performing Monte Carlo simulations in which artificial data are generated according to simple parametric models. Here we introduce a resampling technique for power assessments that can be fully automated and accommodate almost any variation in the collected time series data. A rank regression model is employed to extract error terms representing irregular variation in data that are collected over several seasons and may contain a non-linear trend. Thereafter, an autoregressive moving average (ARMA) bootstrap method is used to generate new time series of error terms for power simulations. A study of water quality data from two Swedish rivers illustrates how our method can provide site- and variable-specific information about the power of the Hirsch and Slack test for monotonic trends. In particular, we show how to clarify the impact of sampling frequency on the power of the trend tests. Copyright (c) 2006 John Wiley & Sons, Ltd.
  •  
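The power-estimation idea in the entry above can be illustrated with a minimal sketch: resample observed residuals, add a hypothetical trend, and count how often a trend test rejects. This sketch uses plain i.i.d. resampling and a basic Mann-Kendall test; it does not implement the paper's rank regression, ARMA bootstrap, or the seasonal Hirsch and Slack test.

    # Hedged sketch: power of a Mann-Kendall trend test estimated by resampling residuals.
    import numpy as np
    from scipy.stats import norm

    def mann_kendall_p(x):
        """Two-sided p-value of the basic Mann-Kendall test (ties ignored)."""
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        return 2 * (1 - norm.cdf(abs(z)))

    rng = np.random.default_rng(1)
    residuals = rng.normal(size=60)          # stand-in for error terms extracted from real data
    slope, n_rep, hits = 0.03, 500, 0
    for _ in range(n_rep):
        e = rng.choice(residuals, size=60, replace=True)   # resampled noise
        y = slope * np.arange(60) + e                      # hypothetical trend plus noise
        hits += mann_kendall_p(y) < 0.05
    print("estimated power:", hits / n_rep)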
4.
  •  
5.
  • Ahlinder, Jon, et al. (author)
  • Chemometrics comes to court: evidence evaluation of chem–bio threat agent attacks
  • 2015
  • In: Journal of Chemometrics. - : John Wiley & Sons. - 0886-9383 .- 1099-128X. ; 29:5, pp. 267-276
  • Journal article (peer-reviewed), abstract:
    • Forensic statistics is a well-established scientific field whose purpose is to statistically analyze evidence in order to support legal decisions. It traditionally relies on methods that assume small numbers of independent variables and multiple samples. Unfortunately, such methods are less applicable when dealing with highly correlated multivariate data sets such as those generated by emerging high throughput analytical technologies. Chemometrics is a field that has a wealth of methods for the analysis of such complex data sets, so it would be desirable to combine the two fields in order to identify best practices for forensic statistics in the future. This paper provides a brief introduction to forensic statistics and describes how chemometrics could be integrated with its established methods to improve the evaluation of evidence in court. The paper describes how statistics and chemometrics can be integrated by analyzing a previously known forensic data set composed of bacterial communities from fingerprints. The presented strategy can be applied in cases where chemical and biological threat agents have been illegally disposed of.
  •  
6.
  •  
7.
  • Ansell, Ricky, et al. (author)
  • Interpretation of DNA Evidence: Implications of Thresholds Used in the Forensic Laboratory
  • 2014
  • Conference paper (other academic/artistic), abstract:
    • Evaluation of forensic evidence is a process lined with decisions and balancing, not infrequently with a substantial degree of subjectivity. Already at the crime scene many decisions have to be made about search strategies, the amount of evidence and traces to recover, what to prioritise and send on to the forensic laboratory, etc. Within the laboratory there must be several criteria (often in terms of numbers) on how much and what parts of the material should be analysed. In addition there is often a restricted timeframe for delivery of a statement to the commissioner, which in reality might influence the work done. The path of DNA evidence from the recovery of a trace at the crime scene to the interpretation and evaluation made in court involves several decisions based on cut-offs of different kinds. These include quality assurance thresholds like limits of detection and quantitation, but also less strictly defined thresholds like upper limits on the prevalence of alleles not observed in DNA databases. In a verbal scale of conclusions there are lower limits on likelihood ratios for DNA evidence above which the evidence can be said to strongly support, very strongly support, etc. a proposition about the source of the evidence. Such thresholds may be arbitrarily chosen or based on logical reasoning with probabilities. However, likelihood ratios for DNA evidence depend strongly on the population of potential donors, and this may not be understood among the end-users of such a verbal scale. Even apparently strong DNA evidence against a suspect may be reported on either side of a threshold in the scale depending on whether a close relative is part of the donor population or not. In this presentation we review the use of thresholds and cut-offs in DNA analysis and interpretation and investigate the sensitivity of the final evaluation to how such rules are defined. In particular we show the effects of cut-offs when multiple propositions about alternative sources of a trace cannot be avoided, e.g. when there are close relatives of the suspect with high propensities to have left the trace. Moreover, we discuss the possibility of including costs (in terms of time or money) for a decision-theoretic approach in which expected values of information could be analysed.
  •  
8.
  • Bovens, Michael, et al. (author)
  • Chemometrics in forensic chemistry — Part I: Implications to the forensic workflow
  • 2019
  • In: Forensic Science International. - : Elsevier BV. - 0379-0738 .- 1872-6283. ; 301, pp. 82-90
  • Journal article (peer-reviewed), abstract:
    • The forensic literature shows a clear trend towards increasing use of chemometrics (i.e. multivariate analysis and other statistical methods). This can be seen in different disciplines such as drug profiling, arson debris analysis, spectral imaging, glass analysis, age determination, and more. In particular, current chemometric applications cover low-dimensional (e.g. drug impurity profiles) and high-dimensional data (e.g. infrared and Raman spectra) and are therefore useful in many forensic disciplines. There is a dominant and increasing need in forensic chemistry for reliable and structured processing and interpretation of analytical data. This is especially true when classification (grouping) or profiling (batch comparison) is of interest. Chemometrics can provide additional information in complex crime cases and enhance productivity by improving the processes of data handling and interpretation in various applications. However, the use of chemometrics in everyday work tasks is often considered demanding by forensic scientists and, consequently, such methods are used only reluctantly. This article and the planned follow-up contributions are dedicated to those forensic chemists who are interested in applying chemometrics but are, for whatever reason, limited in the proper application of statistical tools (usually made for professionals) or lack the direct support of statisticians. Without claiming to be comprehensive, the literature review provided a sufficient overview of the preferred data handling and chemometric methods used to answer forensic questions. On this basis, a software tool will be designed (as part of the EU project STEFA-G02) and handed out to forensic chemists with all necessary elements of data handling and evaluation. Because practical casework is less and less handled from beginning to end by the same person, more and more interfaces arise through the specialization of individuals. This article presents key influencing elements in the forensic workflow related to the most meaningful chemometric application and evaluation.
  •  
9.
  • Hedman, Johannes, et al. (author)
  • A ranking index for quality assessment of forensic DNA profiles
  • 2010
  • In: BMC Research Notes. - : BioMed Central Ltd. - 1756-0500. ; 3:290
  • Journal article (peer-reviewed), abstract:
    • Background: Assessment of DNA profile quality is vital in forensic DNA analysis, both in order to determine the evidentiary value of DNA results and to compare the performance of different DNA analysis protocols. Generally the quality assessment is performed through manual examination of the DNA profiles based on empirical knowledge, or by comparing the intensities (allelic peak heights) of the capillary electrophoresis electropherograms. Results: We recently developed a ranking index for unbiased and quantitative quality assessment of forensic DNA profiles, the forensic DNA profile index (FI) (Hedman et al., Improved forensic DNA analysis through the use of alternative DNA polymerases and statistical modeling of DNA profiles, Biotechniques 47 (2009) 951-958). FI uses electropherogram data to combine the intensities of the allelic peaks with the balances within and between loci, using Principal Components Analysis. Here we present the construction of FI. We explain the mathematical and statistical methodologies used and present details about the applied data reduction method. Thereby we show how to adapt the ranking index for any Short Tandem Repeat-based forensic DNA typing system through validation against a manual grading scale and calibration against a specific set of DNA profiles. Conclusions: The developed tool provides unbiased quality assessment of forensic DNA profiles. It can be applied for any DNA profiling system based on Short Tandem Repeat markers. Apart from crime related DNA analysis, FI can therefore be used as a quality tool in paternity or familial testing as well as in disaster victim identification.
  •  
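The FI described above combines peak intensities and peak balances through Principal Components Analysis. A minimal sketch of a PCA-derived quality score along those lines follows; the feature set, the scaling, and the use of the first component alone are assumptions made for illustration, not the published construction, which is validated and calibrated against graded profiles.

    # Hedged sketch: a PCA-based quality score for DNA profiles (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 4))                        # stand-in features: peak heights and balances, one row per profile
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)    # standardise each feature
    _, _, vt = np.linalg.svd(Z, full_matrices=False)    # PCA via singular value decomposition
    score = Z @ vt[0]                                   # projection onto the first principal component
    print(np.argsort(score)[::-1][:5])                  # indices of the five highest-scoring profiles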
10.
  • Kadane, Joseph B., et al. (author)
  • Using Bayes factors to limit forensic testimony to forensics: composite hypotheses
  • 2024
  • In: Australian journal of forensic sciences. - : TAYLOR & FRANCIS LTD. - 0045-0618 .- 1834-562X.
  • Journal article (peer-reviewed), abstract:
    • In most western legal systems, only the fact-finder (judge or jury) is entrusted to make the ultimate decision in a criminal case. A forensic expert can help the fact-finder by opining on the weight of the forensic evidence given the hypotheses relevant to the case, but is not qualified to give an opinion about the ultimate question(s). When the question is reduced to two simple hypotheses, a Bayes Factor can express the expert's opinion about the extent to which the forensic evidence favours each hypothesis. This paper addresses the situation in which one or both of the hypotheses are composite, that is, embrace more than one possibility. It offers an interval of Bayes Factors, and shows that the proposed interval includes those values, and only those values, of the Bayes Factor supported by possible beliefs of the fact-finder. Shoe prints, tool marks and DNA are discussed in this light if the hypotheses used in the Bayes Factor are composite.
  •  
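A hedged restatement of the quantity discussed above, in notation of my own choosing: when the defence proposition H_d is composite (H_p is kept simple here for brevity), the Bayes factor depends on the fact-finder's conditional weights over the components of H_d,

    BF = \frac{\Pr(E \mid H_p)}{\sum_{j} w_j \, \Pr(E \mid H_{d,j})}, \qquad w_j = \Pr(H_{d,j} \mid H_d), \quad \sum_j w_j = 1.

As the weights range over all admissible values, this expression ranges between the smallest and the largest of the component likelihood ratios \Pr(E \mid H_p)/\Pr(E \mid H_{d,j}), which appears to be the kind of interval the abstract refers to.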
11.
  • Libiseller, Claudia, 1975-, et al. (author)
  • Variance reduction for trend analysis of hydrochemical data from brackish waters
  • 2003
  • Report (other academic/artistic), abstract:
    • We propose one parametric and one non-parametric method for detection of monotone trends in nutrient concentrations in brackish waters. Both methods take into account that temporal variation in the quality of such waters can be strongly influenced by mixing of salt and fresh water, thus salinity is used as a classification variable in the trend analysis. With the non-parametric approach, Mann-Kendall statistics are calculated for each salinity level, and the parametric method involves the use of bootstrap estimates of the trend slope in a time series regression model. In both cases, tests for each salinity level are combined in an overall trend test.
  •  
12.
  •  
13.
  • Nordgaard, Anders, 1962- (author)
  • A resampling technique for estimating the power of non-parametric trend tests
  • 2004
  • In: COMPSTAT 2004, Prague, Czech Republic.
  • Conference paper (other academic/artistic), abstract:
    • The power of Mann-Kendall tests and other non-parametric trend tests is normally estimated by performing Monte Carlo simulations in which artificial data are generated according to simple parametric models. Here we introduce a resampling technique for power assessments that can be fully automated and accommodate almost any variation in collected time series data. A rank regression model is employed to extract error terms representing irregular variation in data that have been gathered over several seasons and may contain a non-linear trend. Thereafter, an autoregressive bootstrap method is used to generate new time series of error terms for power simulations. These innovations are combined with trend and seasonal components from the fitted rank regression model, and the trend function can be resampled. We also describe a study of water quality data from two Swedish rivers to illustrate how our method can provide site- and variable-specific information about the power of the Hirsch and Slack test for monotonic trends. In particular, we show how our technique can clarify the impact of sampling frequency on the power of this type of trend test.
  •  
14.
  •  
15.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Assessment of Approximate Likelihood Ratios from Continuous Distributions: A Case Study of Digital Camera Identification
  • 2011
  • In: Journal of Forensic Sciences. - : Wiley. - 0022-1198 .- 1556-4029. ; 56:2, pp. 390-402
  • Journal article (peer-reviewed), abstract:
    • A reported likelihood ratio for the value of evidence is very often a point estimate based on various types of reference data. When presented in court, such a frequentist likelihood ratio gains a higher scientific value if it is accompanied by an error bound. This becomes particularly important when the magnitude of the likelihood ratio is modest and it thus gives less support to the forwarded proposition. Here, we investigate methods for error bound estimation for the specific case of digital camera identification. The underlying probability distributions are continuous and previously proposed models for those are used, but the derived methodology is otherwise general. Both asymptotic and resampling distributions are applied in combination with different types of point estimators. The results show that resampling is preferable to assessment based on asymptotic distributions. Further, assessment of parametric estimators is superior to evaluation of kernel estimators when background data are limited.
  •  
16.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Assessment of forensic findings when alternative explanations have different likelihoods—“Blame-the-brother”-syndrome
  • 2012
  • In: Science & justice. - : Elsevier. - 1355-0306 .- 1876-4452. ; 52:4, pp. 226-236
  • Journal article (peer-reviewed), abstract:
    • Assessment of forensic findings with likelihood ratios is straightforward in many cases, but there are a number of situations where the alternative explanation to the evidence needs contemplation, in particular when it comes to reporting the evidentiary strength. The likelihood ratio approach cannot be directly applied to cases where the proposition alternative to the forwarded one is a set of multiple propositions with different likelihoods and different prior probabilities. Here we present a general framework based on the Bayes factor as the quantitative measure of evidentiary strength, from which it can be deduced whether the direct application of a likelihood ratio is reasonable or not. The framework is applied to DNA evidence in the form of an extension to previously published work. With the help of a scale of conclusions we provide a solution to the problem of communicating to the court the evidentiary strength of a DNA match when a close relative of the suspect has a non-negligible prior probability of being the source of the DNA.
  •  
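For orientation, a hedged sketch of the kind of extension discussed above, in notation of my own choosing: if the alternative to the prosecution proposition H_1 splits into "a close relative left the stain" (H_2) and "an unrelated person left the stain" (H_3), a single reportable value has to weight the components by their prior probabilities,

    V = \frac{\Pr(E \mid H_1)}{\pi_2 \, \Pr(E \mid H_2) + \pi_3 \, \Pr(E \mid H_3)}, \qquad \pi_i = \Pr(H_i \mid H_1 \text{ is false}).

This reduces to the ordinary likelihood ratio when \pi_2 = 0, and it can drop sharply when the relative's prior weight \pi_2 is appreciable, because \Pr(E \mid H_2) for a sibling is typically much larger than \Pr(E \mid H_3) for an unrelated individual.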
17.
  • Nordgaard, Anders, 1962-, et al. (author)
  • ’Blame the brother’-Assessment of forensic DNA evidence when alternative explanations have different likelihoods
  • 2011
  • In: Book of Abstracts, pp. 196-
  • Conference paper (other academic/artistic), abstract:
    • In a crime case where a suspect is assumed to be the donor of a recovered stain, forensic DNA evidence presented in terms of a likelihood ratio is a clear course as long as the set of alternative donors contains no close relative of the suspect, since such a relative has a higher likelihood than an individual unrelated to the suspect. The state of the art today at several laboratories is to report the likelihood ratio but with a reservation stating its lack of validity if the stain originates from a close relative. Buckleton et al.[†] derived a so-called extended likelihood ratio for reporting DNA evidence values when a full sibling is present in the set of potential alternative donors. This approach requires consideration of prior probabilities for each of the alternative donors to be the source of the stain and may therefore be problematic to apply in practice. Here we present an alternative way of using prior probabilities in the extended likelihood ratio when the latter is reported on an ordinal scale of conclusions. Our example shows that for a 12 STR-marker profile, using the extended likelihood ratio approach would not imply a change in the reported level compared to the ordinary likelihood ratio approach, unless the close relative has a very high prior probability of being the donor compared to an unrelated individual. [†] Buckleton JS, Triggs CM, Champod C., Science & Justice 46: 69-78.
  •  
18.
  • Nordgaard, Anders, 1962- (author)
  • Challenges in forensic evidence evaluation
  • 2012
  • Conference paper (other academic/artistic), abstract:
    • Interpretation and evaluation of forensic evidence is in essence a matter of probabilistic reasoning. The absence of models and sufficient background databases designed specifically for each particular forensic case makes it a challenge to pursue such reasoning. However, with a coherent framework it is possible to reason with subjective probabilities (subjective in the sense that they depend on the expert's experience and general knowledge) without leaving the court with a statement that is merely the expert's personal opinion. Bayesian reasoning, through the use of Bayes factors (or very often likelihood ratios), constitutes such a framework. Here we present how the use of an ordinal scale for the Bayes factor can allay the fear of subjectivity, and also how it can ease the problem of evaluating evidence when there are multiple explanations for the forensic findings with different likelihoods.
  •  
19.
  •  
20.
  • Nordgaard, Anders, 1962- (author)
  • Classification of percentages in seizures of narcotic material
  • 2017
  • Conference paper (other academic/artistic), abstract:
    • The percentage of the narcotic substance in a drug seizure may vary a lot depending on when and from whom the seizure was taken. Seizures from a typical consumer would in general show low percentages, while seizures from the early stages of a drug dealing chain would show higher percentages (these will be diluted further down the chain). Legal fact finders must have an up-to-date picture of what is an expected level of the percentage and what levels are to be treated as unusually low or unusually high. This is important for the determination of the sentences to be given in a drug case. In this work we treat the probability distribution of the percentage of a narcotic substance in a seizure from year to year as a time series of beta density functions, which are successively updated with the use of point mass posteriors for the shape parameters. The predictive distribution for a new year is a weighted sum of beta distributions for the previous years, where the weights are found from forward validation. We show that this method of prediction is more accurate than one that uses a predictive distribution built on a likelihood based on all previous years.
  •  
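A minimal sketch of the predictive construction described above: the predictive density for a new year is a weighted sum of the beta densities fitted to earlier years. The parameter values and weights below are hypothetical placeholders; the paper obtains the weights by forward validation, which is not reproduced here.

    # Hedged sketch: predictive density as a weighted sum of yearly beta densities.
    import numpy as np
    from scipy.stats import beta

    yearly_params = [(2.0, 5.0), (2.5, 4.0), (3.0, 3.5)]   # hypothetical (alpha, beta) for three earlier years
    weights = np.array([0.2, 0.3, 0.5])                     # hypothetical mixture weights, summing to 1

    def predictive_pdf(x):
        """Predictive density of the substance proportion (0-1 scale) for the new year."""
        return sum(w * beta.pdf(x, a, b) for w, (a, b) in zip(weights, yearly_params))

    print(predictive_pdf(0.30))   # density at a 30 % substance content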
21.
  •  
22.
  • Nordgaard, Anders, 1962- (author)
  • Computer-intensive methods for dependent observations
  • 1996
  • Doctoral thesis (other academic/artistic), abstract:
    • This dissertation deals with computer-intensive methods for dependent observations. The main part is built up by four papers defining and analyzing a resampling method of bootstrap type for the spectral domain of a stationary Gaussian sequence. The emphasis is on practical aspects as well as on asymptotic validity. The other part develops comprehensive models for statistical extrapolation of spatially collected data. The emphasis is on practical implementation and efficient model selection. The resampling method uses known asymptotic results for the spectral parts of a sample from a stationary sequence. The resampling is done completely in the spectral domain of the sequence and has separate procedures for amplitude and phase resampling. The latter property is a new concept. Some different strategies for the two parts of the resampling are suggested, including previously suggested amplitude resampling methods. As for the phase resampling, the methods are unique for the works included in this dissertation. The performance of the method is analyzed partly by comprehensive simulation studies, partly by studying asymptotic distributions of certain estimators. The simulation results are satisfactory and the asymptotic validity is achieved. Some open questions are discussed. The development of models for extrapolation starts from different assumptions on data. The most successful modelling is by treating the data as coming from a spatial stochastic process. A parametric correlation structure is thus applied, resulting in heavy numerical estimation routines. Another part of the modelling is the assumption of a non-linear mean-value function of data, preferably estimated by cubic spline regression functions. To choose between the different models a comprehensive cross-validation procedure is implemented.
  •  
23.
  •  
24.
  •  
25.
  •  
26.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Indirect Evaluation by Simulation of a Bayesian Network
  • 2014
  • Conference paper (other academic/artistic), abstract:
    • Evidence evaluation when addressing source level propositions is usually done by comparing a piece of recovered material with (specimens of) control material. When the control material source is not available for taking specimens or for investigating it in its entirety, we must rely on photographs or video recordings for making comparisons. An example is the comparison of class characteristics between a recovered footwear print and a picture of a seized shoe, where the evaluation is occasionally made that way. However, this way of pursuing the investigation is due to the need for quick answers, when there is little or no time to send in the entire footwear for the comparison. Moreover, the pictures taken of the sole of the seized footwear are taken by the police under controlled conditions and with high quality equipment. When the suspected source is captured on a lower quality video recording and the recovered material consists of fragments from the original body of material – for instance fire debris – the comparison with the control material source is naturally more difficult. In this paper we present a case where the question is whether recovered fire debris originates from a piece of garment captured on a CCTV recording. We show how a likelihood ratio for the two propositions can be indirectly obtained from a classification of the source of the fire debris, by using a Bayesian network model. Results from fire debris analysis as well as the results from image comparisons can be evaluated against propositions of class, and the updating of the class node for fire debris propagates back to the propositions for source. Feeding the network with uniform priors for the class nodes, we show how simulation can be used to obtain the correct level of the likelihood ratio for further reporting.
  •  
27.
  •  
28.
  •  
29.
  •  
30.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Prediction of the distribution of the percentages of narcotic substances in drug seizures
  • 2016
  • Conference paper (other academic/artistic), abstract:
    • The percentage of the narcotic substance in a drug seizure may vary a lot depending on when and from whom the seizure was taken. Seizures from a typical consumer would in general show low percentages, while seizures from the early stages of a drug dealing chain would show higher percentages (these will be diluted further down the chain). Historical records from the determination of the percentage of narcotic substance in seized drugs reveal that the mean percentage, but also the variation of the percentage, can differ between years. Some drugs show close to monotonic trends while others are more irregular in the temporal variation. Legal fact finders must have an up-to-date picture of what is an expected level of the percentage and what levels are to be treated as unusually low or unusually high. This is important for the determination of the sentences to be given in a drug case. In this work we treat the probability distribution of the percentage of a narcotic substance in a seizure from year to year as a time series of functions. The functions are probability density functions of beta distributions, which are successively updated with the use of point mass posteriors for the shape parameters. The predictive distribution for a new year is a weighted sum of beta distributions for the previous years, where the weights are found from forward validation. We show that this method of prediction is more accurate than one that uses a predictive distribution built on a likelihood based on all previous years.
  •  
31.
  •  
32.
  • Nordgaard, Anders, 1962- (author)
  • Quantifying experience in sample size determination for drug analysis
  • 2006
  • In: Law, Probability and Risk. - 1470-8396 .- 1470-840X. ; 4:4, pp. 217-225
  • Journal article (peer-reviewed), abstract:
    • Forensic analysis of pills suspected to contain illegal drugs is a time-consuming process; therefore, only a small sample from a seizure can be investigated. Notwithstanding, for drugs like Ecstasy, the experience of forensic analysts indicates that a seizure of tablets usually consists either wholly of illicit drugs or of no illegal substances at all. Consequently, it should be possible to draw fairly accurate conclusions based on very small samples, if all pills in a sample are indeed analytically identical. The forensic experience is modelled using a beta prior distribution for the proportion of drug-containing tablets in a seizure, and the sample size is determined so that a certain confidence statement can be made about this proportion. The parameters of the beta prior must be set to correspond with the experience, and this paper suggests a method for estimating these parameter values from a database comprising analyst reports representing the experience. The technique is applied to proportions of Ecstasy pills, and the results show that a sample of five pills is enough to state with a high level of confidence that at least half the tablets in a presumed Ecstasy seizure are genuine.
  •  
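A hedged numerical sketch of the reasoning above: with a Beta(a, b) prior on the proportion of drug-containing tablets and n analysed pills that all test positive, the posterior is Beta(a + n, b) (a binomial approximation for a large seizure), and the smallest n giving the required posterior probability that the proportion exceeds one half can be read off directly. The prior parameters below are hypothetical, not the values estimated in the paper.

    # Hedged sketch: smallest sample size meeting a posterior confidence requirement.
    from scipy.stats import beta

    a, b = 0.1, 0.1          # hypothetical U-shaped prior: most mass near 0 and near 1
    target = 0.99            # required posterior probability that the proportion is at least 0.5
    for n in range(1, 11):
        posterior = beta(a + n, b)                 # posterior after n positive pills and no negatives
        p_at_least_half = 1 - posterior.cdf(0.5)
        print(n, round(p_at_least_half, 4))
        if p_at_least_half >= target:
            break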
33.
  • Nordgaard, Anders, 1962- (author)
  • Resampling species-wise abundance data
  • 2006
  • In: The 17th Annual Conference of The International Environmetrics Society (TIES), Kalmar, Sweden.
  • Conference paper (other academic/artistic), abstract:
    • Monitoring the abundance of plant species in grasslands is time-consuming. Accordingly, sampling or inspection is usually sparse both in time and space. Typically, a grassland area is visited 1-2 times per decade, and each time 5-20 plots are inspected. For each plot (about one square meter) an inspection protocol, containing coverage data for up to 100 species, is established. The collected data can thus be characterized as high-dimensional and sparse. Moreover, it is not unusual that some of the monitored species are present in only a few of the investigated plots, i.e., the vectors of coverage data may contain numerous zeroes. The analysis of abundance data can be either multivariate or univariate. Canonical correlation analysis (CCA) and redundancy analysis (RDA) are widely used multivariate methods. Univariate analyses are usually applied to summary statistics, such as diversity indices or measures of evenness. In either case, the complexity of the data makes it difficult to use parametric models for inference about the whole grassland, and modest sample sizes prevent the use of asymptotic results. Due to this, nonparametric methods, such as permutation tests, are often used to assess trends in abundance data. However, the power of these tests may be low due to the small number of sampling occasions. Here, we propose a resampling technique that can be used to determine the distribution of arbitrary estimators or test statistics based on high-dimensional abundance data. The original idea of the bootstrap is to substitute the true (but unknown) cumulative distribution function (cdf) with an empirical cumulative distribution function (edf) calculated from a sample of observations. When the collected data can be regarded as a simple random sample, the bootstrap principle provides a convenient method to determine the distributions of a large number of moment-related statistics (e.g. Singh, 1981). Also, it has been demonstrated that regression or time series data can be resampled by first extracting residuals (or innovations) and then forming pseudo data by resampling these residuals (Wu, 1989; Kreiss & Franke, 1992). We propose that high-dimensional abundance data be resampled by extracting residuals from a principal components factor analysis in which a small number of factors are retained. Furthermore, we handle point masses at zero (absent species) by using a truncated probit function to transform the original data prior to the principal components factor analysis, and to back-transform the pseudo data. The threshold and the number of factors retained are determined in such a way that the most important features of the resampled data are similar to those of the original observations. In particular, the number of observed species should not differ too much. The latter is achieved by using a subsampling procedure, in which the number of zeros (i.e. non-observed species) in a subsample and in pseudo-data from that subsample are compared. Also, relative biases and coverage degrees of empirical confidence intervals are optimized. The performance of our procedure is illustrated by extensive simulations and a case study of temporal changes in Shannon entropy in a grassland in South West Sweden.
  •  
34.
  •  
35.
  • Nordgaard, Anders, 1962- (author)
  • Sample-size Determination for Analysis of Ecstasy Pills
  • 2005
  • In: The Sixth International Conference on Forensic Statistics, Tempe, AZ, USA.
  • Conference paper (other academic/artistic), abstract:
    • By experience, a seizure of pills that is suspected to contain drugs will very likely either consist entirely of drug pills of the same kind or consist of pills with no drug content at all. If this experience can be quantified, it is possible to reduce the number of pills that must be selected for analysis. Recent results show that a Bayesian approach to sample size determination expresses the problem in a more natural way from a forensic point of view, provided an informative prior can be defined. Also, the sample sizes can be further reduced in this framework compared with the more classical hypergeometric approach. These results have been adopted by the European Network of Forensic Science Institutes (ENFSI) in the Guideline on Representative Drug Sampling, published by the ENFSI Drug Working Group. In this text, as well as in other published results, it is suggested to use a beta prior which should be highly left-skewed when assumptions of the above kind can be made. In this paper we show how a beta prior can be calculated from a database of analysed Ecstasy pills. By dividing the database items into different sub-populations, it is possible to estimate the parameter in the prior that controls the left-skewness of the distribution.
  •  
36.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Sampling strategies
  • 2018. - 1
  • In: Integrated Analytical Approaches for Pesticide Management. - : Elsevier. - 9780128161555 - 9780128161562 ; pp. 31-46
  • Book chapter (other academic/artistic), abstract:
    • The first step of a sampling strategy should be to clearly define its purpose. The aim typically is to obtain information for decision making. The decision rules typically involve estimates of the characteristics of the population (often the mean and standard deviation). The decision rules also require a definition of the population to be sampled. There are many techniques available for obtaining estimates of key characteristics of the population. The definition of population is critical but often nontrivial. There are many different sampling strategies available. The theoretically simplest scheme is simple random sampling and that can be used to provide estimates of the mean and standard deviation. More precise estimates can be obtained using stratified sampling. Cluster sampling is useful when there is significant travel time between the sampling units—however, except in simple cases, estimation of the mean and standard error requires expert input. Systematic sampling is simple to apply and gives precise estimates of the mean. Good estimates of the standard error are not available, but there are useful approximations.
  •  
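As a rough illustration of the comparison made above between simple random and stratified sampling (the population, the strata and the sample sizes are invented for the example, and finite-population corrections are ignored):

    # Hedged sketch: mean and standard-error estimates under two sampling strategies.
    import numpy as np

    rng = np.random.default_rng(0)
    strata = [rng.normal(10, 1, 400), rng.normal(20, 2, 600)]   # hypothetical population in two strata
    weights = np.array([0.4, 0.6])                              # stratum shares of the population

    # Simple random sample of 50 units from the pooled population
    pop = np.concatenate(strata)
    srs = rng.choice(pop, 50, replace=False)
    print("SRS mean:", srs.mean(), "SE:", srs.std(ddof=1) / np.sqrt(50))

    # Stratified sample of 25 units per stratum; estimate is the weighted stratum mean
    samples = [rng.choice(s, 25, replace=False) for s in strata]
    strat_mean = sum(w * s.mean() for w, s in zip(weights, samples))
    strat_se = np.sqrt(sum(w**2 * s.var(ddof=1) / 25 for w, s in zip(weights, samples)))
    print("Stratified mean:", strat_mean, "SE:", strat_se)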
37.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Scale of conclusions for the value of evidence
  • 2012
  • In: Law, Probability and Risk. - Oxford : Oxford University Press. - 1470-8396 .- 1470-840X. ; 11:1, pp. 1-24
  • Journal article (peer-reviewed), abstract:
    • Scales of conclusion in forensic interpretation play an important role in the interface between scientific work at a forensic laboratory and different bodies of the jurisdictional system of a country. Of particular importance is the use of a unified scale that allows interpretation of different kinds of evidence in one common framework. The logical approach to forensic interpretation comprises the use of the likelihood ratio as a measure of evidentiary strength. While fully understood by forensic scientists, the likelihood ratio may be hard to interpret for a person not trained in natural sciences or mathematics. Translation of likelihood ratios to an ordinal scale including verbal counterparts of the levels is therefore a necessary procedure for communicating evidence values to the police and in the courtroom. In this paper, we present a method to develop an ordinal scale for the value of evidence that can be applied to any type of forensic findings. The method is built on probabilistic reasoning about the interpretation of findings and the number of scale levels chosen is a compromise between a pragmatic limit and mathematically well-defined distances between levels. The application of the unified scale is illustrated by a number of case studies.
  •  
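A minimal sketch of the translation step described above, mapping a numerical likelihood ratio onto an ordinal scale with verbal counterparts. The cut-offs and labels are illustrative placeholders, not the levels derived in the paper.

    # Hedged sketch: mapping a likelihood ratio to an illustrative ordinal level.
    def scale_level(lr):
        levels = [(1e6, "+4 (extremely strong support)"),
                  (1e4, "+3 (strong support)"),
                  (1e2, "+2 (support)"),
                  (1e0, "+1 (limited support)")]
        for cutoff, label in levels:
            if lr >= cutoff:
                return label
        return "supports the alternative proposition instead"

    print(scale_level(3.2e5))   # prints "+3 (strong support)"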
38.
  • Nordgaard, Anders, 1962-, et al. (author)
  • The forensic DNA profile index - a tool for comparison of electropherograms
  • 2010
  • In: English Speaking Working Group (ESWG), International Society of Forensic Genetics (ISFG), Stockholm, Sweden.
  • Conference paper (other academic/artistic), abstract:
    • Background: Assessment of DNA profile quality is vital in forensic DNA analysis, both in order to determine the evidentiary value of DNA results and to compare the performances of different DNA analysis protocols. Generally the quality assessment is performed through manual examination of the DNA profiles based on empirical knowledge, or by comparing the intensities (allelic peak heights) of the capillary electrophoresis electropherograms. We recently [1] developed a ranking index for unbiased and quantitative quality assessment of forensic DNA profiles, the forensic DNA profile index (FI). Core of presentation: FI uses electropherogram data to combine the intensities of the allelic peaks with the balances within and between loci, using Principal Components Analysis. Here we present the construction of FI. We explain the mathematical and statistical methodologies used and present details about the applied data reduction method. Thereby we show how to adapt the ranking index for any STR-based forensic DNA typing system through validation against a manual grading scale and calibration against a specific set of DNA profiles. [1] Hedman J, Nordgaard A, Rasmusson B, Ansell R, Rådström P (2009), Improved forensic DNA analysis through the use of alternative DNA polymerases and statistical modelling of DNA profiles, Biotechniques 47, 351-358.
  •  
39.
  • Nordgaard, Anders, 1962-, et al. (author)
  • The likelihood ratio as value of evidence—more than a question of numbers
  • 2012
  • In: Law, Probability and Risk. - Oxford : Oxford University Press. - 1470-8396 .- 1470-840X. ; 11, pp. 303-315
  • Journal article (peer-reviewed), abstract:
    • The ability of the experienced forensic scientist to evaluate his or her results given the circumstances and propositions in a particular case and present this to the court in a clear and concise way is very important for the legal process. Court officials can neither be expected to be able to interpret scientific data, nor is it their task to do so (in our opinion). The duty of the court is rather to perform the ultimate evidence evaluation of all the information in the case combined, including police reports, statements from suspects and victims, witness reports, forensic expert statements, etc. Without the aid of the forensic expert, valuable forensic results may be overlooked or misinterpreted in this process. The scientific framework for forensic interpretation stems from Bayesian theory. The resulting likelihood ratio, which may be expressed using a verbal or a numerical scale, compares how frequent the obtained results are given that one of the propositions holds with how frequent they are given that the other proposition holds. A common misunderstanding is that this approach must be restricted to forensic areas such as DNA evidence where extensive background information is present in the form of comprehensive databases. In this article we argue that the approach with likelihood ratios is equally applicable in areas where the results rely on scientific background data combined with the knowledge and experience of the forensic scientist. In such forensic areas the scale of the likelihood ratio may be rougher compared to a DNA case, but the information that is conveyed by the likelihood ratio may nevertheless be highly valuable for the court.
  •  
40.
  • Nordgaard, Anders, 1962-, et al. (author)
  • The Multivariate Kernel Likelihood Ratio Method Applied on Comparison of Amphetamine Seizures
  • 2014
  • Conference paper (other academic/artistic), abstract:
    • Comparison of seizures of amphetamine with respect to their origins of illicit manufacturing can be done by investigating the amphetamine impurity pattern. Such an impurity pattern is the result of an incomplete cleaning-up process – typical for illicit manufacturing – when producing the drug. The manufacturing process can be divided into three steps: (1) choosing a recipe for how to produce; (2) producing amphetamine oil; and finally (3) precipitating the amphetamine from the oil. The impurity pattern of the amphetamine will depend on the recipe itself, the conditions used for the synthesis, the precipitation process and the method of cleaning-up. The impurity profile is a chromatogram of around 150 different contaminants; 26 of these contaminants have been used by several European countries in police intelligence work to link manufacturers of illicit drugs [1]. However, the linkage methods used are investigative and not evaluative. The issue addressed when two specific seizures are to be compared, and the results are going to be used in a court of law, is whether they originate from the same precipitation batch. When this is true, the impurity patterns of the two seizures are in general expected to be similar, at least for stable contaminants. This is a less expected result if the seizures originate from different batches. Interpretation of observed similarities and differences between the impurity patterns of two seizures is still to a large extent based on subjective judgements; in Sweden the experience of two forensic experts is used. In this presentation we show how the so-called multivariate kernel likelihood ratio approach [2] can be used for this interpretation. From a designed experiment comprising several recipes, the variance components for a subset or for a lower-dimensional projection of all contaminants are estimated, and likelihood ratios can then be easily calculated. A cross-validatory study shows high sensitivity as well as high specificity of the likelihood ratios.
  •  
41.
  •  
42.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Variance reduction for trend analysis
  • 2002
  • In: NORDSTAT 2002, Stockholm, Sweden.
  • Conference paper (other academic/artistic), abstract:
    • The concentrations of nutrients and other substances in a water body can be strongly influenced by random fluctuations in the mixing of waters of different origin. Hence, the water quality at a given site can exhibit a large temporal variation that makes it difficult to extract anthropogenic signals from collected data. In this paper, we examine how the human impact on nutrient concentrations in such water bodies can be clarified by replacing conventional time series or geostatistical approaches with trend detection techniques in which we analyse the variation in nutrient concentrations with salinity and time. The general principles for the trend detection are illustrated with data from the Baltic Sea. The statistical significance of temporal changes in nutrient concentrations can be assessed by using parametric and nonparametric trend tests. In the recent past a nonparametric trend test with correction for covariates was proposed (Libiseller and Grimvall, 2002). This test, however, is best applied when trends are monotone in time, which is not necessarily fulfilled for the original data. We therefore suggest that an overall trend test be computed as the weighted sum of trend test statistics computed for different salinity levels. By this means we obtain a rather homogeneous time series in each subset, which considerably improves the power of the trend test. In the parametric approach we suggest a regression model, with total phosphorus concentration as the dependent variable and time (months) as the explanatory variable. The residuals from this model are most likely non-independent and non-normally distributed, and we will therefore apply bootstrap assessment of the estimated parameters.
  •  
43.
  •  
Type of publication
conference papers (25)
journal articles (14)
reports (2)
doctoral theses (1)
book chapters (1)
Type of content
other academic/artistic (24)
peer-reviewed (14)
popular science, debate etc. (5)
Author/editor
Nordgaard, Anders, 1 ... (42)
Ansell, Ricky (8)
Hedell, Ronny (3)
Grimvall, Anders (2)
Ahlinder, Jon (2)
Aitken, Colin (2)
Drotz, Weine (2)
Hedman, Johannes (2)
Libiseller, Claudia, ... (2)
Jaeger, Lars (2)
Höglund, Tobias (2)
Andersson, Kjell (1)
Wiklund Lindström, S ... (1)
Lindgren, Petter (1)
Forsman, Mats (1)
Taroni, Franco (1)
Biedermann, Alex (1)
Rasmusson, Birgitta, ... (1)
Widén, Christina (1)
Leitet, Elisabet (1)
Stenberg, Per, 1974- (1)
Johansson, Anders, 1 ... (1)
Myrtennäs, Kerstin (1)
Bovens, Michael (1)
Ahrens, Björn (1)
Alberink, Ivo (1)
Salonen, Tuomas (1)
Huhtala, Sami (1)
Grimvall, Anders, 19 ... (1)
Ekberg, Kajsa (1)
Emanuelson, Anna (1)
Rasmusson, Birgitta (1)
Wahlin, Karl (1)
Jonasson, Lennart (1)
Kadane, Joseph B. (1)
Nordgaard, Anders, D ... (1)
Libiseller, Claudia (1)
Du, Yang (1)
Hedberg, Karin (1)
Dahlhaus, Rainer, Pr ... (1)
Mannerskog, Susanne (1)
Svensson, Ulf (1)
Lövby, Märtha (1)
Brorsson Läthén, Kla ... (1)
Correll, Raymond (1)
Högberg, Carina (1)
Higher education institution
Linköpings universitet (43)
Umeå universitet (1)
Lunds universitet (1)
Language
English (39)
Swedish (4)
Research subject (UKÄ/SCB)
Natural sciences (42)
Social sciences (18)
Medicine and health sciences (2)
Engineering and technology (1)
Agricultural sciences (1)

Year


 