SwePub
Search the SwePub database


Result list for search "WFRF:(Kirchler Michael)"

Search: WFRF:(Kirchler Michael)

  • Result 1-10 of 58
1.
  • Menkveld, Albert J., et al. (author)
  • Nonstandard Errors
  • 2024
  • In: Journal of Finance. - : Wiley-Blackwell. - 0022-1082 .- 1540-6261. ; 79:3, pp. 2339-2390
  • Journal article (peer-reviewed). Abstract:
    • In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: nonstandard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
  •  
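The nonstandard-error idea in entry 1 can be made concrete with a small simulation: each of many teams analyzing the same data reports a point estimate, and the spread of those estimates across teams is the nonstandard error, on top of each team's own standard error. The sketch below is only an illustration under made-up numbers; the true effect, the size of the analyst-choice variation, and the typical standard error are all hypothetical, and only the count of 164 teams comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

n_teams = 164        # number of teams, as in the abstract; everything else is hypothetical
true_effect = 0.10   # hypothetical population parameter
egp_spread = 0.05    # hypothetical variation due to analytical (EGP) choices

# Each team's point estimate differs because of its analytical choices.
team_estimates = true_effect + rng.normal(0.0, egp_spread, size=n_teams)

# Nonstandard error: dispersion of point estimates across teams.
nse = team_estimates.std(ddof=1)

# A typical team's reported standard error (hypothetical, held fixed for simplicity).
typical_se = 0.04

print(f"nonstandard error (across-team SD): {nse:.3f}")
print(f"typical standard error (within-team): {typical_se:.3f}")
```

Comparing the two printed numbers is the contrast these abstracts draw between nonstandard errors and conventional standard errors.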
2.
  • Botvinik-Nezer, Rotem, et al. (author)
  • Variability in the analysis of a single neuroimaging dataset by many teams
  • 2020
  • In: Nature. - : Springer Science and Business Media LLC. - 0028-0836 .- 1476-4687. ; 582, pp. 84-88
  • Journal article (peer-reviewed). Abstract:
    • Data analysis workflows in many scientific domains have become increasingly complex and flexible. Here we assess the effect of this flexibility on the results of functional magnetic resonance imaging by asking 70 independent teams to analyse the same dataset, testing the same 9 ex-ante hypotheses (1). The flexibility of analytical approaches is exemplified by the fact that no two teams chose identical workflows to analyse the data. This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset (2-5). Our findings show that analytical flexibility can have substantial effects on scientific conclusions, and identify factors that may be related to variability in the analysis of functional magnetic resonance imaging. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for performing and reporting multiple analyses of the same data. Potential approaches that could be used to mitigate issues related to analytical variability are discussed. The results obtained by seventy different teams analysing the same functional magnetic resonance imaging dataset show substantial variation, highlighting the influence of analytical choices and the importance of sharing workflows publicly and performing multiple analyses.
  •  
3.
  • Benjamin, Daniel J., et al. (author)
  • Redefine statistical significance
  • 2018
  • In: Nature Human Behaviour. - : Nature Research (part of Springer Nature). - 2397-3374. ; 2:1, pp. 6-10
  • Journal article (other academic/artistic)
  •  
4.
  • Menkveld, Albert J., et al. (author)
  • Non-Standard Errors
  • 2021
  • Other publication (other academic/artistic). Abstract:
    • In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer-feedback, and (iii) is underestimated by participants.
  •  
5.
  • Menkveld, Albert J., et al. (author)
  • Non-Standard Errors
  • 2024
  • In: Journal of Finance. - 0022-1082. ; 79:3, pp. 2339-2390
  • Journal article (peer-reviewed). Abstract:
    • In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty—nonstandard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
  •  
6.
  • Menkveld, Albert J., et al. (author)
  • Nonstandard Errors
  • 2024
  • In: Journal of Finance. - : Wiley-Blackwell. - 1540-6261 .- 0022-1082. ; 79:3, pp. 2339-2390
  • Journal article (peer-reviewed). Abstract:
    • In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty—nonstandard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
  •  
7.
  • Pérignon, Christophe, et al. (author)
  • Reproducibility of Empirical Results: Evidence from 1,000 Tests in Finance
  • 2022
  • In: SSRN Electronic Journal. - Paris : HEC Paris. - 1556-5068.
  • Other publication (other academic/artistic). Abstract:
    • We analyze the computational reproducibility of more than 1,000 empirical answers to six research questions in finance provided by 168 international research teams. Surprisingly, neither researcher seniority nor the quality of the research paper seems related to the level of reproducibility. Moreover, researchers exhibit strong overconfidence when assessing the reproducibility of their own research and underestimate the difficulty faced by their peers when attempting to reproduce their results. We further find that reproducibility is higher for researchers with better coding skills and for those exerting more effort. It is lower for more technical research questions and more complex code.
  •  
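To make the notion of computational reproducibility in entry 7 concrete, a minimal check is to rerun a team's code and compare the regenerated estimates with the originally reported ones within a tolerance. The sketch below uses entirely hypothetical numbers and a hypothetical tolerance; it is not the verification protocol used in the study.

```python
import numpy as np

# Originally reported estimates versus the values obtained by rerunning the
# code (both hypothetical).
reported = np.array([0.123, -0.045, 0.310, 0.078])
rerun    = np.array([0.123, -0.047, 0.309, 0.102])

tolerance = 0.005  # hypothetical threshold for calling a result reproduced
reproduced = np.abs(reported - rerun) <= tolerance

print("reproduced per result:", reproduced.tolist())
print(f"reproducibility rate: {reproduced.mean():.0%}")
```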
8.
  • Aczel, Balazs, et al. (author)
  • Consensus-based guidance for conducting and reporting multi-analyst studies
  • 2021
  • In: eLIFE. - : eLife Sciences Publications. - 2050-084X. ; 10
  • Journal article (peer-reviewed). Abstract:
    • Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.
  •  
9.
  • Altmejd, Adam, et al. (author)
  • Predicting the replicability of social science lab experiments
  • 2019
  • In: PLOS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 14:12
  • Journal article (peer-reviewed). Abstract:
    • We measure how accurately replication of experimental results can be predicted by black-box statistical models. With data from four large-scale replication projects in experimental psychology and economics, and techniques from machine learning, we train predictive models and study which variables drive predictable replication. The models predict binary replication with a cross-validated accuracy rate of 70% (AUC of 0.77) and estimate relative effect sizes with a Spearman ρ of 0.38. The accuracy level is similar to market-aggregated beliefs of peer scientists [1, 2]. The predictive power is validated in a pre-registered out-of-sample test of the outcome of [3], where 71% (AUC of 0.73) of replications are predicted correctly and effect size correlations amount to ρ = 0.25. Basic features such as the sample and effect sizes in original papers, and whether reported effects are single-variable main effects or two-variable interactions, are predictive of successful replication. The models presented in this paper are simple tools to produce cheap, prognostic replicability metrics. These models could be useful in institutionalizing the process of evaluation of new findings and guiding resources to those direct replications that are likely to be most informative.
  •  
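The evaluation described in entry 9 (a black-box model scored by cross-validated accuracy, AUC, and a Spearman correlation for relative effect sizes) can be sketched on simulated data. The features, the simulated labels, and the random-forest choice below are all assumptions for illustration; the sketch does not reproduce the paper's models or data.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_studies = 200

# Hypothetical features: original sample size, original effect size, and
# whether the reported effect is a two-variable interaction (0/1).
X = np.column_stack([
    rng.integers(20, 500, n_studies).astype(float),
    rng.normal(0.4, 0.2, n_studies),
    rng.integers(0, 2, n_studies).astype(float),
])

# Simulated replication outcomes, loosely tied to the features.
logits = 0.004 * X[:, 0] + 2.0 * X[:, 1] - 0.8 * X[:, 2] - 1.0
y = (rng.random(n_studies) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Out-of-fold predicted replication probabilities from 5-fold cross-validation.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]

print("cross-validated accuracy:", round(accuracy_score(y, proba > 0.5), 2))
print("cross-validated AUC:", round(roc_auc_score(y, proba), 2))

# Rank correlation between predicted replication probability and a simulated
# relative effect size in the replication.
observed_effect = y * rng.normal(0.8, 0.2, n_studies)
rho, _ = spearmanr(proba, observed_effect)
print("Spearman rho:", round(rho, 2))
```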
10.
  • Brütt, Katharina, et al. (author)
  • Competition and moral behavior: A meta-analysis of forty-five crowd-sourced experimental designs
  • 2023
  • In: Proceedings of the National Academy of Sciences - PNAS. - : National Academy of Sciences. - 1091-6490 .- 0027-8424. ; 120:23
  • Journal article (peer-reviewed). Abstract:
    • Does competition affect moral behavior? This fundamental question has been debated among leading scholars for centuries, and more recently, it has been tested in experimental studies yielding a body of rather inconclusive empirical evidence. A potential source of ambivalent empirical results on the same hypothesis is design heterogeneity: variation in true effect sizes across various reasonable experimental research protocols. To provide further evidence on whether competition affects moral behavior and to examine whether the generalizability of a single experimental study is jeopardized by design heterogeneity, we invited independent research teams to contribute experimental designs to a crowd-sourced project. In a large-scale online data collection, 18,123 experimental participants were randomly allocated to 45 randomly selected experimental designs out of 95 submitted designs. We find a small adverse effect of competition on moral behavior in a meta-analysis of the pooled data. The crowd-sourced design of our study allows for a clean identification and estimation of the variation in effect sizes above and beyond what could be expected due to sampling variance. We find substantial design heterogeneity, estimated to be about 1.6 times as large as the average standard error of effect size estimates of the 45 research designs, indicating that the informativeness and generalizability of results based on a single experimental design are limited. Drawing strong conclusions about the underlying hypotheses in the presence of substantive design heterogeneity requires moving toward much larger data collections on various experimental designs testing the same hypothesis.
  •  
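The heterogeneity comparison in entry 10 (between-design variation about 1.6 times the average standard error) can be sketched with a standard random-effects estimator. The DerSimonian-Laird estimator below is an assumption for illustration rather than the paper's exact method, and all effect sizes and standard errors are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)

n_designs = 45             # number of designs, as in the abstract
true_mean_effect = -0.05   # hypothetical small adverse effect
true_tau = 0.08            # hypothetical between-design SD

se = rng.uniform(0.03, 0.07, n_designs)                        # per-design standard errors
effects = rng.normal(true_mean_effect, np.sqrt(true_tau**2 + se**2))

# DerSimonian-Laird estimate of the between-design variance tau^2.
w = 1.0 / se**2
fixed_effect = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed_effect) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (n_designs - 1)) / c)

print(f"estimated between-design SD (tau): {np.sqrt(tau2):.3f}")
print(f"tau / average standard error: {np.sqrt(tau2) / se.mean():.2f}")
```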
Type of publication
journal article (51)
other publication (6)
reports (1)
Type of content
peer-reviewed (50)
other academic/artistic (8)
Author/Editor
Johannesson, Magnus (19)
Kirchler, Michael (18)
Holzmeister, Felix (15)
Dreber Almenberg, An ... (12)
Huber, Juergen (9)
Stefan, M (6)
Tinghög, Gustav, 197 ... (5)
Aczel, Balazs (4)
Botvinik-Nezer, Rote ... (4)
Sutter, Matthias, 19 ... (4)
Altmejd, Adam (4)
Camerer, Colin (4)
Lindner, F. (4)
Dreber, Anna (4)
Szaszi, Barnabas (3)
Nilsonne, Gustav (3)
Schonberg, Tom (3)
Vanpaemel, Wolf (3)
Wengström, Erik (3)
Västfjäll, Daniel (3)
Holmen, Martin, 1976 (3)
Albers, Casper J. (2)
Busch, Niko A. (2)
Cataldo, Andrea M. (2)
van Dongen, Noah N. ... (2)
Hoekstra, Rink (2)
Hoffmann, Sabine (2)
Mangin, Jean-Francoi ... (2)
Matzke, Dora (2)
Munafò, Marcus R. (2)
Nosek, Brian A. (2)
Poldrack, Russell A. (2)
van Ravenzwaaij, Don (2)
Sarafoglou, Alexandr ... (2)
Schweinsberg, Martin (2)
Simons, Daniel J. (2)
Spellman, Barbara A. (2)
Wicherts, Jelte (2)
Wagenmakers, Eric-Ja ... (2)
Wu, H. (2)
Forsell, Eskil (2)
Imai, Taisuke (2)
Nave, Gideon (2)
Voracek, Martin (2)
Andersson, David (2)
Västfjäll, Daniel, 1 ... (2)
Imai, T. (2)
Wilhelmsson, Anders (2)
Hanke, M. (2)
Ho, Teck-Hua (2)
University
University of Gothenburg (42)
Stockholm School of Economics (18)
Linköping University (6)
Stockholm University (5)
Lund University (5)
Karolinska Institutet (2)
Language
English (58)
Research subject (UKÄ/SCB)
Social Sciences (52)
Natural sciences (9)
Medical and Health Sciences (2)
Humanities (1)
