SwePub

Result list for search: WFRF:(Nordgaard Anders)

  • Results 1-50 of 57
1.
  •  
2.
  • Grimvall, Anders, et al. (author)
  • Sjöar och vattendrag i Skåne – går utvecklingen åt rätt håll? : Statistisk utvärdering av vattenkvalitet och provtagningsprogram i Skåne län
  • 2002
  • Report (other academic/artistic), abstract
    • 1. Summary: A statistical evaluation of time series of water quality data from lakes and watercourses in Skåne County shows that clear environmental changes occurred during the twenty-year period 1981-2000. In two important respects the aquatic environment has improved. Acidification has receded, at least in some areas, and the eutrophication of lakes and watercourses has culminated. Recovery is slow, however, and certain other changes that occurred during the study period can be regarded as less desirable. Light conditions in lakes and watercourses have deteriorated in some places because the amount of particles or coloured organic material has increased. There is also a statistically significant increase in the amount of oxygen-consuming substances in some lakes and watercourses. A closer examination of acidification-related data shows that it is primarily the acid-neutralising capacity (the ANC value) that has increased, whereas the changes in pH are less systematic. Moreover, there are large differences between catchment areas. The current sampling and analysis programme for lakes and watercourses in Skåne County is fit for purpose in the sense that it has been able to establish how small year-to-year changes have accumulated into statistically significant changes in the aquatic environment over a twenty-year period. It is also evident that sampling and analyses have in most cases been carried out with good accuracy. Only a small number of measurements deviate inexplicably from the bulk of the measurements from the same site, and the long-term trends identified for different sites and state variables form, with few exceptions, a credible pattern. If sampling in the watercourses is thinned out from one sample per month to one sample per quarter, the probability of detecting environmental changes naturally decreases. Roughly speaking, it would then take about 50% longer to detect a given annual change in the state of the environment. In addition, the programme would provide a weaker basis for determining whether the environmental objectives adopted by the Riksdag are being met.
  •  
3.
  • Lindgren, Petter, et al. (author)
  • A likelihood ratio-based approach for improved source attribution in microbiological forensic investigations
  • 2019
  • In: Forensic Science International. - Elsevier. - 0379-0738 .- 1872-6283. ; 302
  • Journal article (peer-reviewed), abstract
    • A common objective in microbial forensic investigations is to identify the origin of a recovered pathogenic bacterium by DNA sequencing. However, there is currently no consensus about how degrees of belief in such origin hypotheses should be quantified, interpreted, and communicated to wider audiences. To fill this gap, we have developed a concept based on calculating probabilistic evidential values for microbial forensic hypotheses. The likelihood-ratio method underpinning this concept is widely used in other forensic fields, such as human DNA matching, where results are readily interpretable and have been successfully communicated in juridical hearings. The concept was applied to two case scenarios of interest in microbial forensics: (1) identifying source cultures among series of very similar cultures generated by parallel serial passage of the Tier 1 pathogen Francisella tularensis, and (2) finding the production facilities of strains isolated in a real disease outbreak caused by the human pathogen Listeria monocytogenes. Evidence values for the studied hypotheses were computed based on signatures derived from whole genome sequencing data, including deep-sequenced low-frequency variants and structural variants such as duplications and deletions acquired during serial passages. In the F. tularensis case study, we were able to correctly assign fictive evidence samples to the correct culture batches of origin on the basis of structural variant data. By setting up relevant hypotheses and using data on cultivated batch sources to define the reference populations under each hypothesis, evidential values could be calculated. The results show that extremely similar strains can be separated on the basis of amplified mutational patterns identified by high-throughput sequencing. In the L. monocytogenes scenario, analyses of whole genome sequence data conclusively assigned the clinical samples to specific sources of origin, and conclusions were formulated to facilitate communication of the findings. Taken together, these findings demonstrate the potential of using bacterial whole genome sequencing data, including data on both low frequency SNP signatures and structural variants, to calculate evidence values that facilitate interpretation and communication of the results. The concept could be applied in diverse scenarios, including both epidemiological and forensic source tracking of bacterial infectious disease outbreaks. 
  •  
4.
  • Nordgaard, Anders, 1962-, et al. (author)
  • A resampling technique for estimating the power of non-parametric trend tests
  • 2006
  • In: Environmetrics. - Wiley. - 1180-4009 .- 1099-095X. ; 17:3, pp. 257-267
  • Journal article (peer-reviewed), abstract
    • The power of Mann-Kendall tests and other non-parametric trend tests is normally estimated by performing Monte Carlo simulations in which artificial data are generated according to simple parametric models. Here we introduce a resampling technique for power assessments that can be fully automated and accommodate almost any variation in the collected time series data. A rank regression model is employed to extract error terms representing irregular variation in data that are collected over several seasons and may contain a non-linear trend. Thereafter, an autoregressive moving average (ARMA) bootstrap method is used to generate new time series of error terms for power simulations. A study of water quality data from two Swedish rivers illustrates how our method can provide site- and variable-specific information about the power of the Hirsch and Slack test for monotonic trends. In particular, we show how to clarify the impact of sampling frequency on the power of the trend tests. Copyright (c) 2006 John Wiley & Sons, Ltd.
  •  
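The abstract above compresses the method considerably; as a rough illustration of the general idea (a minimal sketch, assuming a simple AR(1) error model rather than the paper's rank-regression-plus-ARMA scheme, and not the authors' code), a resampling-based power estimate for a Mann-Kendall-type trend test might look like this in Python:

    # A minimal sketch of resampling-based power estimation for a Mann-Kendall-type
    # trend test. Assumes AR(1) errors for simplicity; the paper uses a rank
    # regression plus an ARMA bootstrap, so this is an illustration, not their code.
    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(1)

    def mann_kendall_p(series):
        # The Mann-Kendall test statistic is Kendall's tau between series and time.
        return kendalltau(np.arange(len(series)), series).pvalue

    def bootstrap_power(residuals, slope, n_boot=500, alpha=0.05):
        """Power against a linear trend of the given slope (units per time step)."""
        phi = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]  # AR(1) coefficient
        innovations = residuals[1:] - phi * residuals[:-1]
        n = len(residuals)
        rejections = 0
        for _ in range(n_boot):
            e = np.zeros(n)
            draws = rng.choice(innovations, size=n, replace=True)
            for t in range(1, n):                 # rebuild an AR(1) error series
                e[t] = phi * e[t - 1] + draws[t]
            y = slope * np.arange(n) + e          # add the hypothesised trend
            rejections += mann_kendall_p(y) < alpha
        return rejections / n_boot

    # Residuals would come from detrending a real monitoring series; synthetic here.
    print(bootstrap_power(rng.normal(size=240), slope=0.01))

In the paper the extracted error terms come from a rank regression fitted to seasonal data, which is what makes the procedure automatic and site- and variable-specific.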
5.
  •  
6.
  • Ahlinder, Jon, et al. (author)
  • Chemometrics comes to court: evidence evaluation of chem–bio threat agent attacks
  • 2015
  • In: Journal of Chemometrics. - John Wiley & Sons. - 0886-9383 .- 1099-128X. ; 29:5, pp. 267-276
  • Journal article (peer-reviewed), abstract
    • Forensic statistics is a well-established scientific field whose purpose is to statistically analyze evidence in order to support legal decisions. It traditionally relies on methods that assume small numbers of independent variables and multiple samples. Unfortunately, such methods are less applicable when dealing with highly correlated multivariate data sets such as those generated by emerging high throughput analytical technologies. Chemometrics is a field that has a wealth of methods for the analysis of such complex data sets, so it would be desirable to combine the two fields in order to identify best practices for forensic statistics in the future. This paper provides a brief introduction to forensic statistics and describes how chemometrics could be integrated with its established methods to improve the evaluation of evidence in court. The paper describes how statistics and chemometrics can be integrated by analyzing a previously known forensic data set composed of bacterial communities from fingerprints. The presented strategy can be applied in cases where chemical and biological threat agents have been illegally disposed of.
  •  
7.
  •  
8.
  •  
9.
  • Ansell, Ricky, et al. (author)
  • Interpretation of DNA Evidence: Implications of Thresholds Used in the Forensic Laboratory
  • 2014
  • Conference paper (other academic/artistic), abstract
    • Evaluation of forensic evidence is a process lined with decisions and balancing, not infrequently with a substantial deal of subjectivity. Already at the crime scene many decisions have to be made about search strategies, the amount of evidence and traces recovered, later prioritised and sent on to the forensic laboratory, etc. Within the laboratory there must be several criteria (often in terms of numbers) on how much and what parts of the material should be analysed. In addition there is often a restricted timeframe for delivery of a statement to the commissioner, which in reality might influence the work done. The path of DNA evidence from the recovery of a trace at the crime scene to the interpretation and evaluation made in court involves several decisions based on cut-offs of different kinds. These include quality assurance thresholds like limits of detection and quantitation, but also less strictly defined thresholds like upper limits on the prevalence of alleles not observed in DNA databases. In a verbal scale of conclusions there are lower limits on likelihood ratios for DNA evidence above which the evidence can be said to strongly support, very strongly support, etc. a proposition about the source of the evidence. Such thresholds may be arbitrarily chosen or based on logical reasoning with probabilities. However, likelihood ratios for DNA evidence depend strongly on the population of potential donors, and this may not be understood among the end-users of such a verbal scale. Even apparently strong DNA evidence against a suspect may be reported on either side of a threshold in the scale depending on whether a close relative is part of the donor population or not. In this presentation we review the use of thresholds and cut-offs in DNA analysis and interpretation and investigate the sensitivity of the final evaluation to how such rules are defined. In particular we show the effects of cut-offs when multiple propositions about alternative sources of a trace cannot be avoided, e.g. when there are close relatives of the suspect with high propensities to have left the trace. Moreover, we discuss the possibility of including costs (in terms of time or money) for a decision-theoretic approach in which expected values of information could be analysed.
  •  
10.
  • Bovens, Michael, et al. (author)
  • Chemometrics in forensic chemistry — Part I: Implications to the forensic workflow
  • 2019
  • In: Forensic Science International. - Elsevier BV. - 0379-0738 .- 1872-6283. ; 301, pp. 82-90
  • Journal article (peer-reviewed), abstract
    • The forensic literature shows a clear trend towards increasing use of chemometrics (i.e. multivariate analysis and other statistical methods). This can be seen in different disciplines such as drug profiling, arson debris analysis, spectral imaging, glass analysis, age determination, and more. In particular, current chemometric applications cover low-dimensional (e.g. drug impurity profiles) and high-dimensional data (e.g. infrared and Raman spectra) and are therefore useful in many forensic disciplines. There is a dominant and increasing need in forensic chemistry for reliable and structured processing and interpretation of analytical data. This is especially true when classification (grouping) or profiling (batch comparison) is of interest. Chemometrics can provide additional information in complex crime cases and enhance productivity by improving the processes of data handling and interpretation in various applications. However, the use of chemometrics in everyday work tasks is often considered demanding by forensic scientists and, consequently, such methods are only reluctantly used. This article and the planned contributions that follow are dedicated to those forensic chemists who are interested in applying chemometrics but are for any reason limited in the proper application of statistical tools — usually made for professionals — or lack the direct support of statisticians. Without claiming to be comprehensive, the literature review provides an overview of the preferred data handling and chemometric methods used to answer the forensic question. On this basis, a software tool will be designed (as part of the EU project STEFA-G02) and handed out to forensic chemists with all necessary elements of data handling and evaluation. Because practical casework is less and less handled from beginning to end by the same person, specialization builds more and more interfaces into the workflow. This article presents key influencing elements in the forensic workflow related to the most meaningful chemometric application and evaluation.
  •  
11.
  • Dahlman, Christian, et al. (author)
  • Information Economics in the Criminal Standard of Proof
  • 2022
  • In: Law, Probability and Risk. - Oxford University Press (OUP). - 1470-8396 .- 1470-840X. ; 21:3-4, pp. 137-162
  • Journal article (peer-reviewed), abstract
    • In this paper we model the criminal standard of proof as a twofold standard requiring sufficient probability of the factum probandum and sufficient informativeness. The focus of the paper is on the latter requirement, and we use decision theory to develop a model for sufficient informativeness. We demonstrate that sufficient informativeness is fundamentally a question of information economics and switch-ability. In our model, sufficient informativeness is a cost-benefit analysis of further investigations that involves a prediction of the possibility that such investigations will produce evidence that switches the decision from conviction to acquittal. Critics of the Bayesian approach to legal evidence have claimed that 'weight' cannot be captured in a Bayesian model. Contrary to this claim, our model shows how sufficient informativeness can be modelled as a second-order probability.
  •  
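One way to formalise the cost-benefit condition sketched in the abstract above (notation mine, not the paper's): with c the cost of further investigation, P(switch) the predicted probability that the resulting evidence switches the decision from conviction to acquittal, and \Delta U the utility gain from avoiding a wrongful conviction, further investigation is worthwhile when

\[
c \;<\; P(\mathrm{switch}) \cdot \Delta U .
\]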
12.
  • Hedman, Johannes, et al. (author)
  • A ranking index for quality assessment of forensic DNA profiles
  • 2010
  • In: BMC Research Notes. - BioMed Central Ltd. - 1756-0500. ; 3:290
  • Journal article (peer-reviewed), abstract
    • Background: Assessment of DNA profile quality is vital in forensic DNA analysis, both in order to determine the evidentiary value of DNA results and to compare the performance of different DNA analysis protocols. Generally the quality assessment is performed through manual examination of the DNA profiles based on empirical knowledge, or by comparing the intensities (allelic peak heights) of the capillary electrophoresis electropherograms. Results: We recently developed a ranking index for unbiased and quantitative quality assessment of forensic DNA profiles, the forensic DNA profile index (FI) (Hedman et al., Improved forensic DNA analysis through the use of alternative DNA polymerases and statistical modeling of DNA profiles, BioTechniques 47 (2009) 951-958). FI uses electropherogram data to combine the intensities of the allelic peaks with the balances within and between loci, using Principal Components Analysis. Here we present the construction of FI. We explain the mathematical and statistical methodologies used and present details about the applied data reduction method. Thereby we show how to adapt the ranking index for any Short Tandem Repeat-based forensic DNA typing system through validation against a manual grading scale and calibration against a specific set of DNA profiles. Conclusions: The developed tool provides unbiased quality assessment of forensic DNA profiles. It can be applied to any DNA profiling system based on Short Tandem Repeat markers. Apart from crime-related DNA analysis, FI can therefore be used as a quality tool in paternity or familial testing as well as in disaster victim identification.
  •  
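As a hedged sketch of the idea described in this abstract (synthetic placeholder features, not the published FI construction), the PCA step could look like:

    # Sketch: combine electropherogram features into one quality index via PCA.
    # Feature values below are synthetic placeholders, not validated FI inputs.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(7)
    X = np.column_stack([
        rng.lognormal(7.0, 0.5, 40),   # mean allelic peak height per profile
        rng.uniform(0.5, 1.0, 40),     # within-locus balance
        rng.uniform(0.4, 1.0, 40),     # between-locus balance
    ])
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize the features

    pca = PCA(n_components=1).fit(Xs)
    raw_index = Xs @ pca.components_[0]         # first principal component score
    # The published method calibrates this score against a manual grading scale;
    # here we only rank the profiles from lowest to highest raw score.
    print(np.argsort(raw_index)[:5])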
13.
  • Hedman, Johannes, et al. (author)
  • Improved forensic DNA analysis through the use of alternative DNA polymerases and statistical modeling of DNA profiles
  • 2009
  • In: BioTechniques. - Eaton Publishing. - 0736-6205 .- 1940-9818. ; 47:5, pp. 951-958
  • Journal article (peer-reviewed), abstract
    • DNA evidence, linking perpetrators to crime scenes, is central to many legal proceedings. However, DNA samples from crime scenes often contain PCR-inhibitory substances, which may generate blank or incomplete DNA profiles. Extensive DNA purification can be required to rid the sample of these inhibitors, although these procedures increase the risk of DNA loss. Most forensic laboratories use commercial DNA amplification kits (e.g., AmpFlSTR SGM Plus) with the DNA polymerase AmpliTaq Gold as the gold standard. Here, we show that alternative DNA polymerase-buffer systems can improve the quality of forensic DNA analysis and efficiently circumvent PCR inhibition in crime scene samples, without additional sample preparation. DNA profiles from 20 of 32 totally or partially inhibited crime scene saliva samples were significantly improved using Bio-X-Act Short, ExTaq Hot Start, or PicoMaxx High Fidelity instead of AmpliTaq Gold. A statistical model for unbiased quality control of forensic DNA profiles was developed to quantify the results. Our study demonstrates the importance of adjusting the chemistry of the PCR to enhance forensic DNA analysis and diagnostic PCR, providing an alternative to laborious sample preparation protocols.
  •  
14.
  • Hedman, Johannes, et al. (author)
  • Synergy between DNA polymerases increases polymerase chain reaction inhibitor tolerance in forensic DNA analysis
  • 2010
  • In: Analytical Biochemistry. - Elsevier Inc. - 0003-2697 .- 1096-0309. ; 405, pp. 192-200
  • Journal article (peer-reviewed), abstract
    • The success rate of diagnostic polymerase chain reaction (PCR) analysis is lowered by inhibitory substances present in the samples. Recently, we showed that tolerance to PCR inhibitors in crime scene saliva stains can be improved by replacing the standard DNA polymerase AmpliTaq Gold with alternative DNA polymerase–buffer systems (Hedman et al., BioTechniques 47 (2009) 951-958). Here we show that blending inhibitor-resistant DNA polymerase–buffer systems further increases the success rate of PCR for various types of real crime scene samples showing inhibition. For 34 of 42 "inhibited" crime scene stains, the DNA profile quality was significantly improved using a DNA polymerase blend of ExTaq Hot Start and PicoMaxx High Fidelity compared with AmpliTaq Gold. The significance of the results was confirmed by analysis of variance. The blend performed as well as, or better than, the alternative DNA polymerases used separately for all tested sample types. When used separately, the performance of the DNA polymerases varied depending on the nature of the sample. The superiority of the blend is discussed in terms of complementary effects and synergy between the DNA polymerase–buffer systems.
  •  
15.
  •  
16.
  • Kadane, Joseph B., et al. (author)
  • Using Bayes factors to limit forensic testimony to forensics: composite hypotheses
  • 2024
  • In: Australian Journal of Forensic Sciences. - Taylor & Francis Ltd. - 0045-0618 .- 1834-562X.
  • Journal article (peer-reviewed), abstract
    • In most western legal systems, only the fact-finder (judge or jury) is entrusted to make the ultimate decision in a criminal case. A forensic expert can help the fact-finder by opining on the weight of the forensic evidence given the hypotheses relevant to the case, but is not qualified to give an opinion about the ultimate question(s). When the question is reduced to two simple hypotheses, a Bayes factor can express the expert's opinion about the extent to which the forensic evidence favours each hypothesis. This paper addresses the situation in which one or both of the hypotheses are composite, that is, embrace more than one possibility. It offers an interval of Bayes factors, and shows that the proposed interval includes those values, and only those values, of the Bayes factor supported by possible beliefs of the fact-finder. Shoe prints, tool marks and DNA are discussed in this light when the hypotheses used in the Bayes factor are composite.
  •  
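A hedged sketch of the interval construction described above (notation mine, not quoted from the paper): if the alternative hypothesis is composite, $H_2 = \bigcup_i H_{2,i}$, the Bayes factor depends on the fact-finder's weights $w_i$ over the sub-hypotheses,

\[
\mathrm{BF}(w) \;=\; \frac{p(E \mid H_1)}{\sum_i w_i\, p(E \mid H_{2,i})},
\qquad w_i \ge 0,\ \sum_i w_i = 1,
\]

and as the weights range over all possible beliefs the denominator sweeps the convex hull of the component likelihoods, so the supported values form exactly the interval

\[
\left[\, \frac{p(E \mid H_1)}{\max_i p(E \mid H_{2,i})},\;
\frac{p(E \mid H_1)}{\min_i p(E \mid H_{2,i})} \,\right].
\]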
17.
  • Libiseller, Claudia, 1975-, et al. (author)
  • Variance reduction for trend analysis of hydrochemical data from brackish waters
  • 2003
  • Report (other academic/artistic), abstract
    • We propose one parametric and one non-parametric method for detection of monotone trends in nutrient concentrations in brackish waters. Both methods take into account that temporal variation in the quality of such waters can be strongly influenced by mixing of salt and fresh water, thus salinity is used as a classification variable in the trend analysis. With the non-parametric approach, Mann-Kendall statistics are calculated for each salinity level, and the parametric method involves the use of bootstrap estimates of the trend slope in a time series regression model. In both cases, tests for each salinity level are combined in an overall trend test.
  •  
18.
  •  
19.
  • Malmborg, Jonas, et al. (author)
  • Forensic characterization of mid-range petroleum distillates using light biomarkers
  • 2016
  • In: Environmental Forensics. - Taylor & Francis Ltd. - 1527-5922 .- 1527-5930. ; 17:3, pp. 244-252
  • Journal article (peer-reviewed), abstract
    • Due to oil refining, commonly used higher boiling biomarkers for oil-source correlation are absent from mid-range petroleum distillates, while lighter biomarkers are concentrated in such products. This study evaluated 63 diagnostic ratios of light biomarkers such as bicyclic sesquiterpanes, diamondoids, and lighter aromatic compounds using 70 diesel oil samples obtained from three Swedish refineries and local gas stations, mostly over a six-month period in 2015. On the basis of their diagnostic power and partial correlation coefficients, a set of 24 ratios is suggested for oil-source correlation of lighter products. The frequency of false positives for this set was determined to be approximately 0.1%. It should be emphasized that in the event of an oil spill, diesel oils are rapidly influenced by weathering and many of the ratios will be affected.
  •  
20.
  • Malmborg, Jonas, et al. (author)
  • Transfer, persistence, contamination and background levels of inorganic gunshot residues
  • 2024
  • In: Forensic Chemistry. - Elsevier. - 2468-1709. ; 39
  • Journal article (peer-reviewed), abstract
    • This paper summarises the available literature data for the evidential evaluation topics of transfer (196 experiments), persistence (63 time series), contamination (1515 samples), and background prevalence (2158 samples) of inorganic gunshot residues (IGSR). In-house data on IGSR transfer, and the prevalence and persistence of IGSR on different types of glove, are also contributed. Combining new and previously published data in a meta-analysis, we report the following findings: The median transfer rate of IGSR was 11% and the probability distribution of contact transfer was modelled using a Beta distribution. The half-life of IGSR on hands was estimated at 52 min. On gloves, decay followed a two-phase process, with the slower phase proceeding at a decreased rate compared to hands (t1/2,slow = 77 min). The occurrence of characteristic IGSR on the hands of police officers was modelled using a generalised Pareto model (GPM). Combining the prevalence and transfer probability models, a product probability distribution model was established. The product model estimates the probability of finding any amount of IGSR post-arrest on previously clean hands, giving an 8% probability of non-zero transfer. Characteristic IGSR detected on the general public (1% positives), on at-risk individuals (2% positives), and in public places (0% positives) showed low background levels. The likelihood of finding any amount of IGSR on the general public (at-risk included) was modelled using a GPM, giving a 1.3% probability of finding at least one, and a 0.2% probability of finding more than three, characteristic IGSR particles on the general public.
  •  
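As a worked example of the reported half-life (my arithmetic, not a figure from the paper): with $t_{1/2} = 52$ min on hands, the expected remaining fraction of residue after time $t$ is

\[
f(t) = 2^{-t/52}, \qquad f(180\ \mathrm{min}) = 2^{-180/52} \approx 0.09,
\]

so roughly 9% of deposited particles would be expected to remain on the hands three hours after discharge, ignoring the slower second phase reported for gloves.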
21.
  • Malmborg, Jonas, et al. (author)
  • Validation of a feature-based likelihood ratio method for the SAILR software. Part I: Gas chromatography-mass spectrometry data for comparison of diesel oil samples
  • 2021
  • In: Forensic Chemistry. - Elsevier. - 2468-1709. ; 26
  • Journal article (peer-reviewed), abstract
    • Statistical modelling of probability distributions from background data to arrive at a likelihood ratio (LR) is becoming more common in the forensic community. An open-source software called SAILR was recently launched by a European Union-funded project to provide forensic practitioners with a mathematical backbone in a user-friendly graphical interface. Before presenting values produced by the software as evidence in court, the LR method must be validated. In this study, a multivariate feature-based LR method for SAILR was validated using gas chromatography-mass spectrometry data from comparison of diesel oil samples. The validation strategy relied on use of specific performance characteristics (e.g., accuracy, discrimination, and calibration) and their corresponding metrics (e.g., cost of log-likelihood ratio and equal error rate). The validation also encompassed the normality assumption for within-source variation. Any deviation from the normality assumption was mitigated using Lambert W transformation of the data, which improved model performance. The LR method chosen for validation was optimized using background data, and a baseline method was simultaneously developed to provide the validation criteria. The results showed that the available data could support a trivariate (or lower) model. The LR method chosen for validation outperformed the baseline method according to the performance characteristics. Using the empirical lower and upper boundaries LR method, the output limits were determined to be 1/537 < LR < 1412. By passing the tests of normality and the validation criteria, the method was considered valid within this LR range for data of sufficient quality, and relevant to the background data set.
  •  
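For orientation, the "cost of log-likelihood ratio" metric mentioned in this abstract is, in its commonly used form (standard background, not quoted from the paper),

\[
C_{\mathrm{llr}} \;=\; \frac{1}{2}\left(
\frac{1}{N_{ss}} \sum_{i=1}^{N_{ss}} \log_2\!\left(1 + \frac{1}{LR_i}\right)
\;+\;
\frac{1}{N_{ds}} \sum_{j=1}^{N_{ds}} \log_2\!\left(1 + LR_j\right)
\right),
\]

where the first sum runs over same-source comparisons and the second over different-source comparisons; lower values indicate more discriminating and better calibrated LRs.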
22.
  • Malmborg, Jonas, et al. (author)
  • Validation of a feature-based likelihood ratio method for the SAILR software. Part II: Elemental compositional data for comparison of glass samples
  • 2022
  • In: Forensic Chemistry. - Elsevier. - 2468-1709. ; 27
  • Journal article (peer-reviewed), abstract
    • SAILR is open-source software designed to calculate forensic likelihood ratios (LR) from probability distributions of reference data. The purpose of this study was to demonstrate validation of a multivariate feature-based LR method for SAILR using compositional data on glass fragments. Validation was performed using designated performance characteristics, e.g., accuracy, discrimination, and calibration. These characteristics were measured using performance metrics such as the cost of the log-likelihood ratio and the equal error rate. The LR method was developed simultaneously with a baseline method whose features were less discriminating but better aligned with the normality assumption for within-source variation. The baseline method served as the floor of acceptable performance. The results showed that the available data supported LR methods using three elemental features or fewer. Best performance was obtained using calcium, magnesium, and silicon. The within-source variation in elemental features was slightly leptokurtic (heavy-tailed), violating the assumption of normality. The data were therefore normalized using the Lambert W transformation, and the performance of the LR method using normalized data was compared with that using non-normalized data. Although performance improved with normalization, the difference was small. Limits of LR output were set to 1/512 < LR < 158 using the empirical lower and upper boundaries (ELUB) LR method. This limited range was primarily a consequence of notable within-source variation. By passing the tests of normality and outperforming the baseline method, the method was considered valid for use in SAILR for data relevant to the background data set, within the defined range of LRs.
  •  
23.
  • Martyna, Agnieszka, et al. (author)
  • Likelihood ratio-based probabilistic classifier
  • 2023
  • In: Chemometrics and Intelligent Laboratory Systems. - Elsevier. - 0169-7439 .- 1873-3239. ; 240
  • Journal article (peer-reviewed), abstract
    • Modern classification methods are likely to misclassify samples with rare but class-specific data that are more similar (less distant) to the data of another class than to those of the original class. This is because they tend to focus on the majority of data, leaving the information provided by the rare data practically ignored. Nevertheless, such data are an invaluable source of information that should support the classification of samples, despite their low frequency. Current solutions considering rarity information involve likelihood ratio (LR) models. We intend to modify the existing LR models to establish the class membership of the analysed samples by comparing them with samples of known class label. If two compared samples show similarities in rare but class-specific features, the analysed sample is much more likely to be a member of that class than of any other class, even when its features are less distant from the features of most samples from other classes. The fundamental advantage of the developed methodology is the inclusion of information about rare, class-specific features, which is neglected by ordinary classifiers. Converting LR values into probabilities with which a sample belongs to the classes under consideration generates a powerful tool within the concept of probabilistic classification.
  •  
24.
  • Nordgaard, Anders, 1962- (author)
  • A resampling technique for estimating the power of non-parametric trend tests
  • 2004
  • In: COMPSTAT 2004, Prague, Czech Republic.
  • Conference paper (other academic/artistic), abstract
    • The power of Mann-Kendall tests and other non-parametric trend tests is normally estimated by performing Monte Carlo simulations in which artificial data are generated according to simple parametric models. Here we introduce a resampling technique for power assessments that can be fully automated and accommodate almost any variation in collected time series data. A rank regression model is employed to extract error terms representing irregular variation in data that have been gathered over several seasons and may contain a non-linear trend. Thereafter, an autoregressive bootstrap method is used to generate new time series of error terms for power simulations. These innovations are combined with trend and seasonal components from the fitted rank regression model, and the trend function can be resampled. We also describe a study of water quality data from two Swedish rivers to illustrate how our method can provide site- and variable-specific information about the power of the Hirsch and Slack test for monotonic trends. In particular, we show how our technique can clarify the impact of sampling frequency on the power of this type of trend test.
  •  
25.
  •  
26.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Assessment of Approximate Likelihood Ratios from Continuous Distributions: A Case Study of Digital Camera Identification
  • 2011
  • In: Journal of Forensic Sciences. - Wiley. - 0022-1198 .- 1556-4029. ; 56:2, pp. 390-402
  • Journal article (peer-reviewed), abstract
    • A reported likelihood ratio for the value of evidence is very often a point estimate based on various types of reference data. When presented in court, such a frequentist likelihood ratio gains scientific value if it is accompanied by an error bound. This becomes particularly important when the magnitude of the likelihood ratio is modest and thus gives less support for the forwarded proposition. Here, we investigate methods for error bound estimation for the specific case of digital camera identification. The underlying probability distributions are continuous, and previously proposed models for them are used, but the derived methodology is otherwise general. Both asymptotic and resampling distributions are applied in combination with different types of point estimators. The results show that resampling is preferable to assessment based on asymptotic distributions. Further, assessment of parametric estimators is superior to evaluation of kernel estimators when background data are limited.
  •  
27.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Assessment of forensic findings when alternative explanations have different likelihoods—“Blame-the-brother”-syndrome
  • 2012
  • In: Science & Justice. - Elsevier. - 1355-0306 .- 1876-4452. ; 52:4, pp. 226-236
  • Journal article (peer-reviewed), abstract
    • Assessment of forensic findings with likelihood ratios is straightforward in many cases, but there are a number of situations where the alternative explanation of the evidence needs careful consideration, in particular when it comes to reporting the evidentiary strength. The likelihood ratio approach cannot be directly applied to cases where the proposition alternative to the forwarded one is a set of multiple propositions with different likelihoods and different prior probabilities. Here we present a general framework based on the Bayes factor as the quantitative measure of evidentiary strength, from which it can be deduced whether the direct application of a likelihood ratio is reasonable or not. The framework is applied to DNA evidence in the form of an extension to previously published work. With the help of a scale of conclusions, we provide a solution to the problem of communicating to the court the evidentiary strength of a DNA match when a close relative of the suspect has a non-negligible prior probability of being the source of the DNA.
  •  
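A hedged sketch of the quantity at stake (notation mine, simplified from the paper's framework): when the alternative to the main proposition $H_p$ is a set of explanations $H_{d,1},\dots,H_{d,k}$ with prior probabilities $\pi_i$ conditional on the alternative being true, the Bayes factor takes the form

\[
V \;=\; \frac{p(E \mid H_p)}{\sum_{i=1}^{k} \pi_i \, p(E \mid H_{d,i})},
\]

which reduces to an ordinary likelihood ratio only when a single alternative dominates; a brother with a non-negligible $\pi_i$ and a comparatively high match probability can lower $V$ by orders of magnitude.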
28.
  • Nordgaard, Anders, et al. (author)
  • Bevisresonemang och blandbilder i Leiden
  • 2014
  • In: Kriminalteknik. - Linköping: Statens kriminaltekniska laboratorium. - 1653-6169. ; :3, p. 23-
  • Journal article (other academic/artistic)
  •  
29.
  • Nordgaard, Anders, 1962-, et al. (author)
  • 'Blame the brother' - Assessment of forensic DNA evidence when alternative explanations have different likelihoods
  • 2011
  • In: Book of Abstracts, p. 196-
  • Conference paper (other academic/artistic), abstract
    • In a crime case where a suspect is assumed to be the donor of a recovered stain, forensic DNA evidence presented in terms of a likelihood ratio is a clear course as long as the set of alternative donors contains no close relative of the suspect, since such a relative has a higher likelihood than an individual unrelated to the suspect. The state of the art today at several laboratories is to report the likelihood ratio but with a reservation stating its lack of validity if the stain originates from a close relative. Buckleton et al.[†] derived a so-called extended likelihood ratio for reporting DNA evidence values when a full sibling is present in the set of potential alternative donors. This approach requires consideration of prior probabilities for each of the alternative donors to be the source of the stain and may therefore be problematic to apply in practice. Here we present an alternative way of using prior probabilities in the extended likelihood ratio when the latter is reported on an ordinal scale of conclusions. Our example shows that for a 12 STR-marker profile, using the extended likelihood ratio approach would not imply a change in the level reported compared to the ordinary likelihood ratio approach, unless the close relative has a very high prior probability of being the donor compared to an unrelated individual. [†] Buckleton JS, Triggs CM, Champod C., Science & Justice 46: 69-78.
  •  
30.
  • Nordgaard, Anders, 1962- (author)
  • Challenges in forensic evidence evaluation
  • 2012
  • Conference paper (other academic/artistic), abstract
    • Interpretation and evaluation of forensic evidence is in essence a matter of probabilistic reasoning. The absence of models and sufficient background databases designed specifically for each particular forensic case makes it a challenge to pursue such reasoning. However, with a coherent framework it is possible to reason with subjective probabilities (subjective in the sense that they depend on the expert's experience and general knowledge) without leaving the court with a statement that is merely the expert's personal opinion. Bayesian reasoning, through the use of Bayes factors (or, very often, likelihood ratios), constitutes such a framework. Here we present how the use of an ordinal scale for the Bayes factor can allay the fear of subjectivity, and also how it can ease the problem of evaluating evidence when there are multiple explanations for the forensic findings with different likelihoods.
  •  
31.
  •  
32.
  • Nordgaard, Anders, 1962- (author)
  • Classification of percentages in seizures of narcotic material
  • 2017
  • Conference paper (other academic/artistic), abstract
    • The percentage of the narcotic substance in a drug seizure may vary a lot depending on when and from whom the seizure was taken. Seizures from a typical consumer would in general show low percentages, while seizures from the early stages of a drug-dealing chain would show higher percentages (these will later be diluted). Legal fact finders must have an up-to-date picture of what is an expected level of the percentage and what levels are to be treated as unusually low or unusually high. This is important for the determination of the sentences to be given in a drug case. In this work we treat the probability distribution of the percentage of a narcotic substance in a seizure from year to year as a time series of beta density functions, which are successively updated with the use of point-mass posteriors for the shape parameters. The predictive distribution for a new year is a weighted sum of beta distributions for the previous years, where the weights are found from forward validation. We show that this method of prediction is more accurate than one that uses a predictive distribution built on a likelihood based on all previous years.
  •  
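A minimal sketch of the predictive construction described above (parameter values and weights are illustrative placeholders; in the paper the weights come from forward validation):

    # Predictive distribution for next year's percentage as a weighted sum of
    # Beta densities fitted to previous years. All numbers below are assumed.
    import numpy as np
    from scipy.stats import beta

    yearly_params = [(5.0, 12.0), (6.5, 10.0), (8.0, 9.0)]  # (a, b) per past year
    weights = np.array([0.2, 0.3, 0.5])   # placeholders for forward-validation weights

    def predictive_pdf(x):
        """Mixture density for the percentage (on the 0-1 scale)."""
        return sum(w * beta.pdf(x, a, b) for w, (a, b) in zip(weights, yearly_params))

    x = np.linspace(0.01, 0.99, 99)
    print("predictive mode near:", x[np.argmax(predictive_pdf(x))])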
33.
  •  
34.
  •  
35.
  • Nordgaard, Anders, 1962- (author)
  • Computer-intensive methods for dependent observations
  • 1996
  • Doctoral thesis (other academic/artistic), abstract
    • This dissertation deals with computer-intensive methods for dependent observations. The main part is built up by four papers defining and analyzing a resampling method of bootstrap type for the spectral domain of a stationary Gaussian sequence. The emphasis is on practical aspects as well as on asymptotic validity. The other part develops comprehensive models for statistical extrapolation of spatially collected data. The emphasis is on practical implementation and efficient model selection. The resampling method uses known asymptotic results for the spectral parts of a sample from a stationary sequence. The resampling is done completely in the spectral domain of the sequence and has separate procedures for amplitude and phase resampling. The latter property is a new concept. Some different strategies for the two parts of the resampling are suggested, including previously suggested amplitude resampling methods. As for the phase resampling, the methods are unique for the works included in this dissertation. The performance of the method is analyzed partly by comprehensive simulation studies, partly by studying asymptotic distributions of certain estimators. The simulation results are satisfactory and the asymptotic validity is achieved. Some open questions are discussed. The development of models for extrapolation starts from different assumptions on data. The most successful modelling is by treating the data as coming from a spatial stochastic process. A parametric correlation structure is thus applied, resulting in heavy numerical estimation routines. Another part of the modelling is the assumption of a non-linear mean-value function of data, preferably estimated by cubic spline regression functions. To choose between the different models a comprehensive cross-validation procedure is implemented.
  •  
36.
  •  
37.
  •  
38.
  •  
39.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Indirect Evaluation by Simulation of a Bayesian Network
  • 2014
  • Conference paper (other academic/artistic), abstract
    • Evidence evaluation when addressing source-level propositions is usually done by comparing a piece of recovered material with (specimens of) control material. When the control material source is not available for taking specimens or for investigating it in its entirety, we must resort to photographs or video recordings for making comparisons. An example is the comparison of class characteristics between a recovered footwear print and a picture of a seized shoe, where the evaluation is occasionally made that way. However, this way of pursuing the investigation is due to the need for quick answers, when there is no or little time to send in the entire footwear for the comparison. Moreover, the pictures of the sole of the seized footwear are taken by the police under controlled conditions and with high-quality equipment. When the suspected source is captured on a lower-quality video recording and the recovered material consists of fragments from the original body of material – for instance fire debris – the comparison with the control material source is naturally more difficult. In this paper we present a case where the question is whether recovered fire debris originates from a piece of garment captured on a CCTV recording. We show how a likelihood ratio for the two propositions can be indirectly obtained from a classification of the source of the fire debris, by using a Bayesian network model. Results from fire debris analysis as well as results from image comparisons can be evaluated against propositions of class, and the updating of the class node for fire debris propagates back to the propositions for source. Feeding the network with uniform priors for the class nodes, we show how simulation can be used to obtain the correct level of the likelihood ratio for further reporting.
  •  
40.
  •  
41.
  •  
42.
  •  
43.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Prediction of the distribution of the percentages of narcotic substances in drug seizures
  • 2016
  • Conference paper (other academic/artistic), abstract
    • The percentage of the narcotic substance in a drug seizure may vary a lot depending on when and from whom the seizure was taken. Seizures from a typical consumer would in general show low percentages, while seizures from the early stages of a drug-dealing chain would show higher percentages (these will later be diluted). Historical records from the determination of the percentage of narcotic substance in seized drugs reveal that the mean percentage, but also the variation of the percentage, can differ between years. Some drugs show close to monotonic trends while others are more irregular in their temporal variation. Legal fact finders must have an up-to-date picture of what is an expected level of the percentage and what levels are to be treated as unusually low or unusually high. This is important for the determination of the sentences to be given in a drug case. In this work we treat the probability distribution of the percentage of a narcotic substance in a seizure from year to year as a time series of functions. The functions are probability density functions of beta distributions, which are successively updated with the use of point-mass posteriors for the shape parameters. The predictive distribution for a new year is a weighted sum of beta distributions for the previous years, where the weights are found from forward validation. We show that this method of prediction is more accurate than one that uses a predictive distribution built on a likelihood based on all previous years.
  •  
44.
  •  
45.
  • Nordgaard, Anders, 1962- (author)
  • Quantifying experience in sample size determination for drug analysis
  • 2006
  • In: Law, Probability and Risk. - 1470-8396 .- 1470-840X. ; 4:4, pp. 217-225
  • Journal article (peer-reviewed), abstract
    • Forensic analysis of pills suspected to contain illegal drugs is a time-consuming process; therefore, only a small sample from a seizure can be investigated. Notwithstanding, for drugs like Ecstasy, the experience of forensic analysts indicates that a seizure of tablets usually consists either wholly of illicit drugs or of no illegal substances at all. Consequently, it should be possible to draw fairly accurate conclusions based on very small samples, if all pills in a sample are indeed analytically identical. The forensic experience is modelled using a beta prior distribution for the proportion of drug-containing tablets in a seizure, and the sample size is determined so that a certain confidence statement can be made about this proportion. The parameters of the beta prior must be set to correspond with the experience, and this paper suggests a method for estimating these parameter values from a database of analyst reports representing the experience. The technique is applied to proportions of Ecstasy pills, and the results show that a sample of five pills is enough to state with a high level of confidence that at least half the tablets in a presumed Ecstasy seizure are genuine.
  •  
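A minimal sketch of the confidence statement described in this abstract (the prior parameters below are hypothetical placeholders, not the values estimated from analyst reports in the paper):

    # Bayesian sample-size reasoning: with a strongly U-shaped ("all or nothing")
    # Beta prior on the proportion of drug-containing tablets, a few positive
    # analyses already give high confidence that most of the seizure is genuine.
    from scipy.stats import beta

    a, b = 0.05, 0.05    # assumed prior shape parameters (placeholders)
    n = 5                # pills analysed, all found to contain the drug
    posterior = beta(a + n, b)    # Beta(a + n, b) after n positives out of n
    print(posterior.sf(0.5))      # P(proportion > 0.5 | data), close to 1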
46.
  • Nordgaard, Anders, 1962- (author)
  • Resampling species-wise abundance data
  • 2006
  • In: The 17th Annual Conference of The International Environmetrics Society (TIES), Kalmar, Sweden.
  • Conference paper (other academic/artistic), abstract
    • Monitoring the abundance of plant species in grasslands is time-consuming. Accordingly, sampling or inspection is usually sparse both in time and space. Typically, a grassland area is visited 1-2 times per decade, and each time 5-20 plots are inspected. For each plot (about one square meter) an inspection protocol, containing coverage data for up to 100 species, is established. The collected data can thus be characterized as high-dimensional and sparse. Moreover, it is not unusual that some of the monitored species are present in only a few of the investigated plots, i.e., the vectors of coverage data may contain numerous zeroes. The analysis of abundance data can be either multivariate or univariate. Canonical correlation analysis (CCA) and redundancy analysis (RDA) are widely used multivariate methods. Univariate analyses are usually applied to summary statistics, such as diversity indices or measures of evenness. In either case, the complexity of the data makes it difficult to use parametric models for inference about the whole grassland, and modest sample sizes prevent the use of asymptotic results. Due to this, nonparametric methods, such as permutation tests, are often used to assess trends in abundance data. However, the power of these tests may be low due to the small number of sampling occasions. Here, we propose a resampling technique that can be used to determine the distribution of arbitrary estimators or test statistics based on high-dimensional abundance data. The original idea of the bootstrap is to substitute an empirical cumulative distribution function (edf), calculated from a sample of observations, for the true (but unknown) cumulative distribution function (cdf). When the collected data can be regarded as a simple random sample, the bootstrap principle provides a convenient method to determine the distributions of a large number of moment-related statistics (e.g. Singh, 1981). Also, it has been demonstrated that regression or time series data can be resampled by first extracting residuals (or innovations) and then forming pseudo-data by resampling these residuals (Wu, 1989; Kreiss & Franke, 1992). We propose that high-dimensional abundance data be resampled by extracting residuals from a principal components factor analysis in which a small number of factors are retained. Furthermore, we handle point masses at zero (absent species) by using a truncated probit function to transform the original data prior to the principal components factor analysis, and to back-transform the pseudo-data. The threshold and the number of factors retained are determined in such a way that the resampled data resemble the original observations in the most important respects. In particular, the number of observed species should not differ too much. The latter is achieved by using a subsampling procedure, in which the number of zeros (i.e. non-observed species) in a subsample and in pseudo-data from that subsample are compared. Also, relative biases and coverage degrees of empirical confidence intervals are optimized. The performance of our procedure is illustrated by extensive simulations and a case study of temporal changes in Shannon entropy in a grassland in south-west Sweden.
  •  
47.
  •  
48.
  • Nordgaard, Anders, 1962- (author)
  • Sample-size Determination for Analysis of Ecstasy Pills
  • 2005
  • In: The Sixth International Conference on Forensic Statistics, Tempe, AZ, USA.
  • Conference paper (other academic/artistic), abstract
    • Experience shows that a seizure of pills suspected of containing drugs will very likely either consist entirely of drug pills of the same kind or consist of pills with no drug content at all. If this experience can be quantified, it is possible to reduce the number of pills that must be selected for analysis. Recent results show that a Bayesian approach to sample size determination expresses the problem in a more natural way from a forensic point of view, provided an informative prior can be defined. Also, the sample sizes can be further reduced in this framework compared with the more classical hypergeometric approach. These results have been adopted by the European Network of Forensic Science Institutes (ENFSI) in the Guideline on Representative Drug Sampling, published by the ENFSI Drug Working Group. In this text, as well as in other published results, it is suggested to use a beta prior which should be highly left-skewed when assumptions of the above kind can be made. In this paper we show how a beta prior can be calculated from a database of analysed Ecstasy pills. By dividing the database items into different sub-populations, it is possible to estimate the parameter in the prior that controls the left-skewness of the distribution.
  •  
49.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Sampling strategies
  • 2018. - 1
  • In: Integrated Analytical Approaches for Pesticide Management. - Elsevier. - 9780128161555 - 9780128161562 ; pp. 31-46
  • Book chapter (other academic/artistic), abstract
    • The first step of a sampling strategy should be to clearly define its purpose. The aim typically is to obtain information for decision making. The decision rules typically involve estimates of the characteristics of the population (often the mean and standard deviation). The decision rules also require a definition of the population to be sampled. There are many techniques available for obtaining estimates of key characteristics of the population. The definition of population is critical but often nontrivial. There are many different sampling strategies available. The theoretically simplest scheme is simple random sampling and that can be used to provide estimates of the mean and standard deviation. More precise estimates can be obtained using stratified sampling. Cluster sampling is useful when there is significant travel time between the sampling units—however, except in simple cases, estimation of the mean and standard error requires expert input. Systematic sampling is simple to apply and gives precise estimates of the mean. Good estimates of the standard error are not available, but there are useful approximations.
  •  
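As a hedged illustration of the chapter's point about stratification (synthetic data and allocation, assumptions mine):

    # Stratified estimate of a population mean with proportional allocation.
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical population split into three strata with different means.
    strata = [rng.normal(mu, 1.0, size=n) for mu, n in [(2, 500), (5, 300), (9, 200)]]
    N = sum(len(s) for s in strata)

    means, var_terms, weights = [], [], []
    for s in strata:
        n_h = max(2, round(50 * len(s) / N))        # proportional share of 50 units
        sample = rng.choice(s, size=n_h, replace=False)
        means.append(sample.mean())
        var_terms.append(sample.var(ddof=1) / n_h)  # per-stratum variance of the mean
        weights.append(len(s) / N)

    w = np.array(weights)
    est = w @ np.array(means)                       # stratified mean estimate
    se = np.sqrt(w**2 @ np.array(var_terms))        # its standard error (no fpc)
    print(f"stratified estimate: {est:.2f} +/- {se:.2f}")

Because each stratum is sampled separately, between-stratum variation drops out of the standard error, which is why stratified sampling typically beats simple random sampling when the strata differ.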
50.
  • Nordgaard, Anders, 1962-, et al. (author)
  • Scale of conclusions for the value of evidence
  • 2012
  • In: Law, Probability and Risk. - Oxford: Oxford University Press. - 1470-8396 .- 1470-840X. ; 11:1, pp. 1-24
  • Journal article (peer-reviewed), abstract
    • Scales of conclusion in forensic interpretation play an important role in the interface between scientific work at a forensic laboratory and different bodies of the jurisdictional system of a country. Of particular importance is the use of a unified scale that allows interpretation of different kinds of evidence in one common framework. The logical approach to forensic interpretation comprises the use of the likelihood ratio as a measure of evidentiary strength. While fully understood by forensic scientists, the likelihood ratio may be hard to interpret for a person not trained in natural sciences or mathematics. Translation of likelihood ratios to an ordinal scale including verbal counterparts of the levels is therefore a necessary procedure for communicating evidence values to the police and in the courtroom. In this paper, we present a method to develop an ordinal scale for the value of evidence that can be applied to any type of forensic findings. The method is built on probabilistic reasoning about the interpretation of findings and the number of scale levels chosen is a compromise between a pragmatic limit and mathematically well-defined distances between levels. The application of the unified scale is illustrated by a number of case studies.
  •  
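A minimal sketch of the translation step the paper formalises (the cut-offs and wordings below are illustrative placeholders, not the levels derived in the paper):

    # Map a likelihood ratio to an ordinal verbal scale of conclusions.
    def verbal_level(lr: float) -> str:
        thresholds = [                     # (lower LR bound, label) - assumed values
            (1_000_000, "+4: extremely strong support for the main proposition"),
            (10_000,    "+3: very strong support for the main proposition"),
            (100,       "+2: strong support for the main proposition"),
            (6,         "+1: some support for the main proposition"),
        ]
        if lr < 1:                         # evidence favouring the alternative:
            return verbal_level(1 / lr).replace("+", "-").replace(
                "main", "alternative")     # mirror the scale around LR = 1
        for bound, label in thresholds:
            if lr >= bound:
                return label
        return "0: the findings support neither proposition over the other"

    print(verbal_level(2.5e5))   # "+3: very strong support for the main proposition"
    print(verbal_level(0.002))   # "-2: strong support for the alternative proposition"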
Publication type
conference paper (26)
journal article (26)
report (3)
doctoral thesis (1)
book chapter (1)
Type of content
other academic/artistic (28)
peer-reviewed (24)
popular science, debate etc. (5)
Author/editor
Nordgaard, Anders, 1 ... (42)
Nordgaard, Anders (14)
Ansell, Ricky (12)
Hedman, Johannes (4)
Malmborg, Jonas (4)
Rasmusson, Birgitta (4)
Grimvall, Anders (3)
Aitken, Colin (3)
Hedell, Ronny (3)
Jaeger, Lars (3)
Ahlinder, Jon (2)
Drotz, Weine (2)
Rådström, Peter (2)
Libiseller, Claudia, ... (2)
Höglund, Tobias (2)
Andersson, Kjell (1)
Larsson, Magnus (1)
Bladh, Marie (1)
Sydsjö, Gunilla (1)
Wiklund Lindström, S ... (1)
Lindgren, Petter (1)
Forsman, Mats (1)
Taroni, Franco (1)
Biedermann, Alex (1)
Hedman, J. (1)
Rasmusson, Birgitta, ... (1)
Widén, Christina (1)
Dahlman, Christian (1)
Leitet, Elisabet (1)
Stenberg, Per, 1974- (1)
Johansson, Anders, 1 ... (1)
Myrtennäs, Kerstin (1)
Bovens, Michael (1)
Ahrens, Björn (1)
Alberink, Ivo (1)
Salonen, Tuomas (1)
Huhtala, Sami (1)
Grimvall, Anders, 19 ... (1)
Dufva, Charlotte (1)
Kvist, Ulrik (1)
Ekberg, Kajsa (1)
Emanuelson, Anna (1)
Wahlin, Karl (1)
Rådström, P. (1)
Jonasson, Lennart (1)
Kadane, Joseph B. (1)
Nordgaard, Anders, D ... (1)
Libiseller, Claudia (1)
Martyna, Agnieszka (1)
Du, Yang (1)
Higher education institution
Linköpings universitet (56)
Lunds universitet (4)
Umeå universitet (1)
Naturvårdsverket (1)
Karolinska Institutet (1)
Language
English (51)
Swedish (6)
Research subject (UKÄ/SCB)
Natural sciences (51)
Social sciences (19)
Engineering and technology (4)
Medical and health sciences (3)
Agricultural sciences (1)

Year
