SwePub
Search the SwePub database

  Extended search

Result list for search "WFRF:(Huber Jürgen) srt2:(2020-2024)"

Search: WFRF:(Huber Jürgen) > (2020-2024)

  • Results 1-6 of 6
1.
  • Brütt, Katharina, et al. (authors)
  • Competition and moral behavior: A meta-analysis of forty-five crowd-sourced experimental designs
  • 2023
  • In: Proceedings of the National Academy of Sciences (PNAS). National Academy of Sciences. ISSN 1091-6490, 0027-8424. 120:23
  • Journal article (peer-reviewed), abstract:
    • Does competition affect moral behavior? This fundamental question has been debated among leading scholars for centuries, and more recently, it has been tested in experimental studies yielding a body of rather inconclusive empirical evidence. A potential source of ambivalent empirical results on the same hypothesis is design heterogeneity: variation in true effect sizes across various reasonable experimental research protocols. To provide further evidence on whether competition affects moral behavior and to examine whether the generalizability of a single experimental study is jeopardized by design heterogeneity, we invited independent research teams to contribute experimental designs to a crowd-sourced project. In a large-scale online data collection, 18,123 experimental participants were randomly allocated to 45 randomly selected experimental designs out of 95 submitted designs. We find a small adverse effect of competition on moral behavior in a meta-analysis of the pooled data. The crowd-sourced design of our study allows for a clean identification and estimation of the variation in effect sizes above and beyond what could be expected due to sampling variance. We find substantial design heterogeneity, estimated to be about 1.6 times as large as the average standard error of the effect size estimates of the 45 research designs, indicating that the informativeness and generalizability of results based on a single experimental design are limited. Drawing strong conclusions about the underlying hypotheses in the presence of substantive design heterogeneity requires moving toward much larger data collections on various experimental designs testing the same hypothesis.
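The design-heterogeneity estimate in the abstract above compares the spread of effect size estimates across designs with the spread that sampling variance alone would produce. A minimal sketch of one standard way to quantify this, the DerSimonian-Laird random-effects estimator (the effect sizes and standard errors below are hypothetical, not the paper's data):

```python
import math

def dersimonian_laird_tau2(effects, ses):
    """Estimate between-design heterogeneity (tau^2) from k effect size
    estimates and their standard errors (DerSimonian-Laird method)."""
    k = len(effects)
    w = [1.0 / s**2 for s in ses]                   # inverse-variance weights
    mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - mean)**2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    return max(0.0, (q - (k - 1)) / c)              # truncate at zero

# Hypothetical effect estimates from five designs testing one hypothesis
effects = [0.10, -0.02, 0.25, 0.05, 0.18]
ses = [0.05, 0.05, 0.05, 0.05, 0.05]
tau = math.sqrt(dersimonian_laird_tau2(effects, ses))
avg_se = sum(ses) / len(ses)
print(tau / avg_se)  # heterogeneity relative to the average standard error
```

A ratio above 1, as in the paper's reported 1.6, means the designs disagree by more than their individual sampling uncertainty can explain.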
2.
  • Holzmeister, Felix, et al. (authors)
  • Heterogeneity in effect size estimates: Empirical evidence and practical implications
  • 2023
  • Other publication (other academic/artistic), abstract:
    • A typical empirical study involves choosing a sample, a research design, and an analysis path. Variation in such choices across studies leads to heterogeneity in results that introduces an additional layer of uncertainty not accounted for in reported standard errors and confidence intervals. We provide a framework for studying heterogeneity in the social sciences and divide heterogeneity into population heterogeneity, design heterogeneity, and analytical heterogeneity. We estimate each type of heterogeneity from multi-lab replication studies, prospective meta-analyses of studies varying experimental designs, and multi-analyst studies. Our results suggest that population heterogeneity tends to be relatively small, whereas design and analytical heterogeneity are large. A conservative interpretation of the estimates suggests that incorporating the uncertainty due to heterogeneity would approximately double sample standard errors and confidence intervals. We illustrate that heterogeneity of this magnitude, unless properly accounted for, has severe implications for statistical inference, with strongly increased rates of false scientific claims.
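The claim above, that accounting for heterogeneity roughly doubles standard errors, follows from adding a heterogeneity variance component to the sampling variance. A small illustrative sketch with hypothetical numbers, assuming the two variance components simply add:

```python
import math

def total_se(sampling_se, tau):
    """Combined standard error when between-study heterogeneity
    (standard deviation tau) adds to the sampling variance."""
    return math.sqrt(sampling_se**2 + tau**2)

# Doubling the reported SE corresponds to tau = sqrt(3) * SE,
# since sqrt(se^2 + 3*se^2) = 2*se.
se = 0.05
tau = math.sqrt(3) * se
print(total_se(se, tau) / se)  # approximately 2.0
```

So a heterogeneity standard deviation of about 1.7 times the sampling standard error is already enough to double the total uncertainty, consistent in magnitude with the design heterogeneity reported in the first entry of this list.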
3.
  • Kim, Sora, et al. (authors)
  • Probing the ecology and climate of the Eocene Southern Ocean with sand tiger sharks Striatolamia macrota
  • 2020
  • In: Paleoceanography and Paleoclimatology. American Geophysical Union (AGU). ISSN 2572-4517, 2572-4525. 35:12
  • Journal article (peer-reviewed), abstract:
    • Many explanations for Eocene climate change focus on the Southern Ocean, where tectonics influenced oceanic gateways, ocean circulation reduced heat transport, and greenhouse gas declines prompted glaciation. To date, few studies focus on marine vertebrates at high latitudes to discern the paleoecological and paleoenvironmental impacts of this climate transition. The Tertiary Eocene La Meseta (TELM) Formation has a rich fossil assemblage with which to characterize these impacts; Striatolamia macrota, an extinct (†) sand tiger shark, is abundant throughout the La Meseta Formation. Body size is often tracked to characterize and integrate across multiple ecological dimensions. †S. macrota body size distributions indicate limited changes during TELMs 2–5 based on anterior tooth crown height (n = 450, mean = 19.6 ± 6.4 mm). Similarly, environmental conditions remained stable through this period based on δ18OPO4 values from tooth enameloid (n = 42; 21.5 ± 1.6‰), which corresponds to a mean temperature of 22.0 ± 4.0°C. Our preliminary εNd (n = 4) results indicate an early Drake Passage opening with Pacific inputs during TELM 2–3 (45–43 Ma) based on single unit variation with an overall radiogenic trend. Two possible hypotheses to explain these observations are (1) †S. macrota modified its migration behavior to ameliorate environmental changes related to the Drake Passage opening, or (2) the local climate change was small and the gateway opening had little impact. While we cannot rule out an ecological explanation, a comparison with climate model results suggests that increased CO2 produces warm conditions that also parsimoniously explain the observations.
4.
  • Menkveld, Albert J., et al. (authors)
  • Nonstandard Errors
  • 2024
  • In: Journal of Finance. Wiley-Blackwell. ISSN 1540-6261, 0022-1082. 79:3, pp. 2339-2390
  • Journal article (peer-reviewed), abstract:
    • In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: nonstandard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
5.
  • Pérignon, Christophe, et al. (authors)
  • Reproducibility of Empirical Results: Evidence from 1,000 Tests in Finance
  • 2022
  • In: SSRN Electronic Journal. Paris: HEC Paris. ISSN 1556-5068.
  • Other publication (other academic/artistic), abstract:
    • We analyze the computational reproducibility of more than 1,000 empirical answers to six research questions in finance provided by 168 international research teams. Surprisingly, neither researcher seniority nor the quality of the research paper seems related to the level of reproducibility. Moreover, researchers exhibit strong overconfidence when assessing the reproducibility of their own research and underestimate the difficulty faced by their peers when attempting to reproduce their results. We further find that reproducibility is higher for researchers with better coding skills and for those exerting more effort. It is lower for more technical research questions and more complex code.
6.
  • Uhlmann, Eric L., et al. (authors)
  • Subjective Evidence Evaluation Survey for Multi-Analyst Studies
  • 2024
  • Other publication (other academic/artistic), abstract:
    • Multi-analyst studies explore how well an empirical claim withstands plausible alternative analyses of the same data set by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g., effect size) provided by each analysis team. Although informative about the range of plausible effects in a data set, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item Subjective Evidence Evaluation Survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous multi-analyst study.
