SwePub
Search the SwePub database


Result list for the search "WFRF:(Nosek Brian A.)"

Search: WFRF:(Nosek Brian A.)

  • Results 1-10 of 11
1.
  • Anderson, Christopher J., et al. (author)
  • Response to Comment on "Estimating the reproducibility of psychological science"
  • 2016
  • In: Science. American Association for the Advancement of Science (AAAS). ISSN 0036-8075, 1095-9203. 351(6277)
  • Journal article (other academic/artistic), abstract:
    • Gilbert et al. conclude that evidence from the Open Science Collaboration's Reproducibility Project: Psychology indicates high reproducibility, given the study methodology. Their very optimistic assessment is limited by statistical misconceptions and by causal inferences from selectively interpreted, correlational data. Using the Reproducibility Project: Psychology data, both optimistic and pessimistic conclusions about reproducibility are possible, and neither are yet warranted.
2.
  • Ebersole, Charles R., et al. (author)
  • Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
  • 2020
  • In: Advances in Methods and Practices in Psychological Science. Sage. ISSN 2515-2467, 2515-2459. 3(3), pp. 309-331
  • Journal article (peer-reviewed), abstract:
    • Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3-9; median total sample = 1,279.5, range = 276-3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00-.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19-.50).
3.
  • Aczel, Balazs, et al. (author)
  • Consensus-based guidance for conducting and reporting multi-analyst studies
  • 2021
  • In: eLife. eLife Sciences Publications. ISSN 2050-084X. 10
  • Journal article (peer-reviewed), abstract:
    • Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.
4.
  • Benjamin, Daniel J., et al. (author)
  • Redefine statistical significance
  • 2018
  • In: Nature Human Behaviour. Nature Research (part of Springer Nature). ISSN 2397-3374. 2(1), pp. 6-10
  • Journal article (other academic/artistic)
5.
  • Buckles, Grant, et al. (author)
  • Using prediction markets to predict the outcomes in the Defense Advanced Research Projects Agency’s next-generation social science programme
  • 2021
  • In: Royal Society Open Science. The Royal Society. ISSN 2054-5703. 8(7), article 181308
  • Journal article (peer-reviewed), abstract:
    • There is evidence that prediction markets are useful tools to aggregate information on researchers’ beliefs about scientific results including the outcome of replications. In this study, we use prediction markets to forecast the results of novel experimental designs that test established theories. We set up prediction markets for hypotheses tested in the Defense Advanced Research Projects Agency’s (DARPA) Next Generation Social Science (NGS2) programme. Researchers were invited to bet on whether 22 hypotheses would be supported or not. We define support as a test result in the same direction as hypothesized, with a Bayes factor of at least 10 (i.e. a likelihood of the observed data being consistent with the tested hypothesis that is at least 10 times greater compared with the null hypothesis). In addition to betting on this binary outcome, we asked participants to bet on the expected effect size (in Cohen’s d) for each hypothesis. Our goal was to recruit at least 50 participants who signed up to participate in these markets. Although we met this goal, only 39 participants ended up actually trading. Participants also completed a survey on both the binary result and the effect size. We find that neither prediction markets nor surveys performed well in predicting outcomes for NGS2.
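A note on the Bayes-factor support criterion in the entry above: the sketch below is a minimal illustration, not the paper's actual analysis. It treats the Bayes factor as a likelihood ratio under a normal approximation, comparing a hypothetical point alternative against the null; the values d_hat, se and d_alt are made-up assumptions, not numbers from the study.

    from scipy.stats import norm

    def bayes_factor_point(d_hat, se, d_alt):
        # Likelihood ratio: point alternative (effect = d_alt) vs. the null
        # (effect = 0), for an effect estimate d_hat with standard error se
        # under an approximately normal sampling model.
        return norm.pdf(d_hat, loc=d_alt, scale=se) / norm.pdf(d_hat, loc=0.0, scale=se)

    # Illustrative numbers only, not values from the study:
    bf = bayes_factor_point(d_hat=0.35, se=0.10, d_alt=0.30)
    supported = 0.35 > 0 and bf >= 10   # hypothesized direction and BF >= 10
    print(f"BF = {bf:.0f}, supported: {supported}")   # BF = 403, supported: True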
6.
  • Forsell, Eskil, et al. (author)
  • Predicting replication outcomes in the Many Labs 2 study
  • 2019
  • In: Journal of Economic Psychology. Elsevier. ISSN 1872-7719, 0167-4870. 75, Part A (special issue)
  • Journal article (peer-reviewed), abstract:
    • Understanding and improving reproducibility is crucial for scientific progress. Prediction markets and related methods of eliciting peer beliefs are promising tools to predict replication outcomes. We invited researchers in the field of psychology to judge the replicability of 24 studies replicated in the large scale Many Labs 2 project. We elicited peer beliefs in prediction markets and surveys about two replication success metrics: the probability that the replication yields a statistically significant effect in the original direction (p < 0.001), and the relative effect size of the replication. The prediction markets correctly predicted 75% of the replication outcomes, and were highly correlated with the replication outcomes. Survey beliefs were also significantly correlated with replication outcomes, but had larger prediction errors. The prediction markets for relative effect sizes attracted little trading and thus did not work well. The survey beliefs about relative effect sizes performed better and were significantly correlated with observed relative effect sizes. The results suggest that replication outcomes can be predicted and that the elicitation of peer beliefs can increase our knowledge about scientific reproducibility and the dynamics of hypothesis testing.
7.
  • Isaksson, Siri, et al. (author)
  • Using prediction markets to estimate the reproducibility of scientific research
  • 2015
  • In: Proceedings of the National Academy of Sciences. National Academy of Sciences. ISSN 0027-8424, 1091-6490. 112(50), pp. 15343-15347
  • Journal article (peer-reviewed), abstract:
    • Concerns about a lack of reproducibility of statistically significant results have recently been raised in many fields, and it has been argued that this lack comes at substantial economic costs. We here report the results from prediction markets set up to quantify the reproducibility of 44 studies published in prominent psychology journals and replicated in the Reproducibility Project: Psychology. The prediction markets predict the outcomes of the replications well and outperform a survey of market participants' individual forecasts. This shows that prediction markets are a promising tool for assessing the reproducibility of published scientific results. The prediction markets also allow us to estimate probabilities for the hypotheses being true at different testing stages, which provides valuable information regarding the temporal dynamics of scientific discovery. We find that the hypotheses being tested in psychology typically have low prior probabilities of being true (median, 9%) and that a "statistically significant" finding needs to be confirmed in a well-powered replication to have a high probability of being true. We argue that prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications.
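The abstract above reports that tested hypotheses typically have a low prior probability of being true (median 9%) and that a significant finding needs a well-powered replication before it is probably true. That claim can be illustrated with a minimal worked sketch via Bayes' rule; the assumed power (0.80) and significance level (0.05) are illustrative assumptions, not the paper's model.

    def posterior_true(prior, power=0.80, alpha=0.05):
        # Bayes' rule: probability the hypothesis is true after one
        # statistically significant result, assuming the test has the
        # given power and false-positive rate.
        return power * prior / (power * prior + alpha * (1 - prior))

    p0 = 0.09                # median prior elicited in the markets
    p1 = posterior_true(p0)  # after the original significant finding: ~0.61
    p2 = posterior_true(p1)  # after a significant, well-powered replication: ~0.96
    print(f"{p0:.2f} -> {p1:.2f} -> {p2:.2f}")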
8.
  • Nosek, Brian A., et al. (author)
  • National differences in gender-science stereotypes predict national sex differences in science and math achievement
  • 2009
  • In: Proceedings of the National Academy of Sciences of the United States of America. National Academy of Sciences. ISSN 0027-8424, 1091-6490. 106(26), pp. 10593-10597
  • Journal article (peer-reviewed), abstract:
    • About 70% of more than half a million Implicit Association Tests completed by citizens of 34 countries revealed expected implicit stereotypes associating science with males more than with females. We discovered that nation-level implicit stereotypes predicted nation-level sex differences in 8th-grade science and mathematics achievement. Self-reported stereotypes did not provide additional predictive validity of the achievement gap. We suggest that implicit stereotypes and sex differences in science participation and performance are mutually reinforcing, contributing to the persistent gender gap in science engagement.
9.
  • Pfeiffer, Thomas, et al. (author)
  • Predicting the replicability of social and behavioural science claims in a crisis: The COVID-19 Preprint Replication Project
  • 2023
  • Other publication (other academic/artistic), abstract:
    • Replications are important for assessing the reliability of published findings. However, they are costly, and it is infeasible to replicate everything. Accurate, fast, lower-cost alternatives such as eliciting predictions could accelerate assessment for rapid policy implementation in a crisis. We elicited judgments from participants on 100 claims from preprints about an emerging area of research (COVID-19 pandemic) using an interactive structured elicitation protocol, and we conducted 29 new high-powered replications. After interacting with their peers, participant groups with lower task expertise (‘beginners’) updated their estimates and confidence in their judgements significantly more than groups with greater task expertise (‘experienced’). For experienced individuals, the average accuracy was 0.57 (95% CI: [0.53, 0.61]) after interaction, and they correctly classified 61% of claims; beginners’ average accuracy was 0.58 (95% CI: [0.54, 0.62]), correctly classifying 69% of claims. The difference in accuracy between groups was not statistically significant, and their judgments on the full set of claims were correlated (r=.48). These results suggest that both beginners and more experienced participants using a structured process have some ability to make better-than-chance predictions about the reliability of ‘fast science’ under conditions of high uncertainty. However, given the importance of such assessments for making evidence-based critical decisions in a crisis, more research is required to understand who the right experts in forecasting replicability are and how their judgements ought to be elicited.
10.
  • Protzko, John, et al. (author)
  • High replicability of newly discovered social-behavioural findings is achievable
  • 2024
  • In: Nature Human Behaviour. ISSN 2397-3374. 8(2), pp. 311-319
  • Journal article (peer-reviewed), abstract:
    • Failures to replicate evidence of new discoveries have forced scientists to ask whether this unreliability is due to suboptimal implementation of methods or whether presumptively optimal methods are not, in fact, optimal. This paper reports an investigation by four coordinated laboratories of the prospective replicability of 16 novel experimental findings using rigour-enhancing practices: confirmatory tests, large sample sizes, preregistration and methodological transparency. In contrast to past systematic replication efforts that reported replication rates averaging 50%, replication attempts here produced the expected effects with significance testing (P < 0.05) in 86% of attempts, slightly exceeding the maximum expected replicability based on observed effect sizes and sample sizes. When one lab attempted to replicate an effect discovered by another lab, the effect size in the replications was 97% that in the original study. This high replication rate justifies confidence in rigour-enhancing methods to increase the replicability of new discoveries.
Type of publication
journal article (10)
other publication (1)
Type of content
peer-reviewed (8)
other academic/artistic (3)
Author/editor
Nosek, Brian A. (11)
Johannesson, Magnus (7)
Dreber Almenberg, An ... (6)
Munafò, Marcus R. (3)
Aczel, Balazs (2)
Szaszi, Barnabas (2)
Nilsonne, Gustav (2)
Holzmeister, Felix (2)
Jonas, Kai J. (2)
Kirchler, Michael (2)
Liu, Yang (1)
van den Akker, Olmo ... (1)
Albers, Casper J. (1)
van Assen, Marcel Al ... (1)
Bastiaansen, Jojanne ... (1)
Benjamin, Daniel (1)
Boehm, Udo (1)
Botvinik-Nezer, Rote ... (1)
Bringmann, Laura F. (1)
Busch, Niko A. (1)
Caruyer, Emmanuel (1)
Cataldo, Andrea M. (1)
Cowan, Nelson (1)
Delios, Andrew (1)
van Dongen, Noah N. ... (1)
Donkin, Chris (1)
van Doorn, Johnny B. (1)
Dutilh, Gilles (1)
Egan, Gary F. (1)
Gernsbacher, Morton ... (1)
Hoekstra, Rink (1)
Hoffmann, Sabine (1)
Huber, Juergen (1)
Kindel, Alexander T. (1)
Kunkels, Yoram K. (1)
Lindsay, D. Stephen (1)
Mangin, Jean-Francoi ... (1)
Matzke, Dora (1)
Newell, Ben R. (1)
Poldrack, Russell A. (1)
van Ravenzwaaij, Don (1)
Rieskamp, Jörg (1)
Salganik, Matthew J. (1)
Sarafoglou, Alexandr ... (1)
Schonberg, Tom (1)
Schweinsberg, Martin (1)
Shanks, David (1)
Silberzahn, Raphael (1)
Simons, Daniel J. (1)
Spellman, Barbara A. (1)
Institution
Handelshögskolan i Stockholm (8)
Stockholms universitet (2)
Karolinska Institutet (2)
Göteborgs universitet (1)
Uppsala universitet (1)
Language
English (11)
Research subject (UKÄ/SCB)
Social sciences (9)
Natural sciences (2)
