SwePub
Search the SwePub database


Result list for the search "WFRF:(Nichols Thomas E.) ;lar1:(liu)"

Search: WFRF:(Nichols Thomas E.) > Linköping University

  • Result 1-4 of 4
1.
  • Botvinik-Nezer, Rotem, et al. (author)
  • Variability in the analysis of a single neuroimaging dataset by many teams
  • 2020
  • In: Nature. - : Springer Science and Business Media LLC. - 0028-0836 .- 1476-4687. ; 582, s. 84-88
  • Journal article (peer-reviewed)
    • Data analysis workflows in many scientific domains have become increasingly complex and flexible. Here we assess the effect of this flexibility on the results of functional magnetic resonance imaging by asking 70 independent teams to analyse the same dataset, testing the same 9 ex-ante hypotheses (1). The flexibility of analytical approaches is exemplified by the fact that no two teams chose identical workflows to analyse the data. This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset (2-5). Our findings show that analytical flexibility can have substantial effects on scientific conclusions, and identify factors that may be related to variability in the analysis of functional magnetic resonance imaging. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for performing and reporting multiple analyses of the same data. Potential approaches that could be used to mitigate issues related to analytical variability are discussed. The results obtained by seventy different teams analysing the same functional magnetic resonance imaging dataset show substantial variation, highlighting the influence of analytical choices and the importance of sharing workflows publicly and performing multiple analyses.
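The "meta-analytical approach that aggregated information across teams" in the abstract above can be illustrated with Stouffer's z combination, a standard way to pool independent statistic maps. This is a generic, minimal sketch in NumPy, not the paper's actual consensus-analysis pipeline, and the function name is our own.

```python
import numpy as np

def stouffer_z(z_maps):
    """Combine per-team z-statistic maps with Stouffer's method.

    Under the null hypothesis, each voxel's combined statistic
    sum(z_k) / sqrt(K) is again standard normal, so teams that agree
    in direction reinforce each other while disagreements cancel out.
    """
    z = np.asarray(z_maps, dtype=float)  # shape (K teams, n voxels)
    return z.sum(axis=0) / np.sqrt(z.shape[0])
```

For example, four teams all reporting z = 1 at a voxel combine to z = 2, while a voxel where teams disagree in sign is attenuated toward zero.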
2.
  • Eklund, Anders, 1981-, et al. (author)
  • Cluster failure revisited: Impact of first level design and physiological noise on cluster false positive rates
  • 2019
  • In: Human Brain Mapping. - : Wiley. - 1065-9471 .- 1097-0193. ; 40:7, s. 2017-2032
  • Journal article (peer-reviewed)
    • Methodological research rarely generates a broad interest, yet our work on the validity of cluster inference methods for functional magnetic resonance imaging (fMRI) created intense discussion on both the minutiae of our approach and its implications for the discipline. In the present work, we take on various critiques of our work and further explore the limitations of our original work. We address issues about the particular event-related designs we used, considering multiple event types and randomization of events between subjects. We consider the lack of validity found with one-sample permutation (sign flipping) tests, investigating a number of approaches to improve the false positive control of this widely used procedure. We found that the combination of a two-sided test and cleaning the data using ICA FIX resulted in nominal false positive rates for all data sets, meaning that data cleaning is not only important for resting state fMRI, but also for task fMRI. Finally, we discuss the implications of our work on the fMRI literature as a whole, estimating that at least 10% of the fMRI studies have used the most problematic cluster inference method (p = .01 cluster defining threshold), and how individual studies can be interpreted in light of our findings. These additional results underscore our original conclusions, on the importance of data sharing and thorough evaluation of statistical methods on realistic null data.
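The "one-sample permutation (sign flipping) test" examined in the abstract above can be sketched as follows. This is a textbook illustration of the generic technique in Python/NumPy, assuming the per-subject contrast values are symmetric about zero under the null; it is not the authors' evaluation code, and the function name is our own.

```python
import numpy as np

def sign_flip_test(contrasts, n_perm=5000, seed=0):
    """One-sample group test via sign flipping.

    If the null distribution of each subject's contrast is symmetric
    about zero, randomly flipping the sign of each subject's value
    generates an empirical null distribution for the group mean.
    """
    rng = np.random.default_rng(seed)
    contrasts = np.asarray(contrasts, dtype=float)
    n = contrasts.size
    observed = contrasts.mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=n)
        null[i] = (signs * contrasts).mean()
    # Two-sided p-value: fraction of permutations at least as extreme,
    # counting the observed statistic itself (hence the +1 terms).
    p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)
    return observed, p
```

Note that the validity of this procedure rests entirely on the symmetry assumption, which is exactly what the paper probes: skewed or structured noise can break the nominal false positive control.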
3.
  • Eklund, Anders, 1981-, et al. (author)
  • Reply to Chen et al.: Parametric methods for cluster inference perform worse for two‐sided t‐tests
  • 2019
  • In: Human Brain Mapping. - : Wiley. - 1065-9471 .- 1097-0193. ; 40:5, s. 1689-1691
  • Journal article (popular science, debate, etc.)
    • One‐sided t‐tests are commonly used in the neuroimaging field, but two‐sided tests should be the default unless a researcher has a strong reason for using a one‐sided test. Here we extend our previous work on cluster false positive rates, which used one‐sided tests, to two‐sided tests. Briefly, we found that parametric methods perform worse for two‐sided t‐tests, and that nonparametric methods perform equally well for one‐sided and two‐sided tests.
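The one-sided versus two-sided distinction discussed in the reply above can be made concrete with a simple z-statistic example. This is an illustrative sketch of the general relationship only (for a symmetric null, the two-sided p-value is twice the smaller one-sided p-value), unrelated to the cluster-level analyses in the paper; the function name is our own.

```python
import math

def normal_p_values(z):
    """One-sided and two-sided p-values for a standard normal z statistic.

    A result that looks 'significant' under a one-sided test can fail
    the corresponding two-sided test, since the two-sided p-value is
    double the smaller one-sided p-value.
    """
    # Upper-tail survival function of the standard normal distribution.
    p_greater = 0.5 * math.erfc(z / math.sqrt(2.0))
    p_less = 1.0 - p_greater
    p_two_sided = 2.0 * min(p_greater, p_less)
    return p_greater, p_less, p_two_sided
```

For z = 1.96, the upper-tail p-value is about .025 and the two-sided p-value is about .05, the conventional threshold.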
4.
  • Nichols, Thomas E., et al. (author)
  • Comments: A defense of using resting state fMRI as null data for estimating false positive rates
  • 2017
  • In: Cognitive Neuroscience. - : Taylor & Francis. - 1758-8928 .- 1758-8936. ; 8:3, s. 144-145
  • Journal article (other academic/artistic)
    • A recent Editorial by Slotnick (2017) reconsiders the findings of our paper on the accuracy of false positive rate control with cluster inference in fMRI (Eklund et al., 2016), in particular criticising our use of resting state fMRI data as a source for null data in the evaluation of task fMRI methods. We defend this use of resting fMRI data: while there is much structure in these data, we argue it is representative of task data noise, and analysis software should be able to accommodate this noise. We also discuss a potential problem with Slotnick's own method.
