SwePub

Result list for the search "WFRF:(Baník Gabriel)"

Search: WFRF:(Baník Gabriel)

  • Results 1-5 of 5
1.
  • Ebersole, Charles R., et al. (authors)
  • Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
  • 2020
  • In: Advances in Methods and Practices in Psychological Science (Sage), ISSN 2515-2467, 2515-2459; 3:3, pp. 309-331
  • Journal article (peer-reviewed). Abstract:
    • Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3-9; median total sample = 1,279.5, range = 276-3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Delta r = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00-.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19-.50).
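A minimal sketch of the kind of effect-size pooling the abstract describes: per-lab correlations are combined via a Fisher z-transform and the two protocols are compared. The per-lab correlations and sample sizes below are invented placeholders, not the Many Labs 5 data, and the sample-size-weighted pooling is a generic choice rather than the paper's preregistered analysis.

```python
# Hedged sketch: pool per-lab correlations with a Fisher z-transform and
# compare two protocols. All numbers are made up for illustration.
import math

def pooled_r(rs, ns):
    """Sample-size-weighted mean correlation via Fisher z."""
    zs = [math.atanh(r) for r in rs]
    weights = [n - 3 for n in ns]              # standard weight for Fisher z
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_bar)

# Hypothetical per-lab results for one finding under each protocol.
rpp_rs,     rpp_ns     = [0.02, 0.08, 0.05, -0.01], [150, 300, 220, 180]
revised_rs, revised_ns = [0.06, 0.03, 0.07,  0.04], [160, 280, 240, 200]

r_rpp     = pooled_r(rpp_rs, rpp_ns)
r_revised = pooled_r(revised_rs, revised_ns)
print(f"RP:P protocol pooled r    = {r_rpp:.3f}")
print(f"Revised protocol pooled r = {r_revised:.3f}")
print(f"Delta r                   = {r_revised - r_rpp:.3f}")
```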
2.
  • Grossmann, Igor, et al. (authors)
  • Insights into the accuracy of social scientists' forecasts of societal change
  • 2023
  • In: Nature Human Behaviour (Springer Nature), ISSN 2397-3374; 7, pp. 484-501
  • Journal article (peer-reviewed). Abstract:
    • How well can social scientists predict societal change, and what processes underlie their predictions? To answer these questions, we ran two forecasting tournaments testing the accuracy of predictions of societal change in domains commonly studied in the social sciences: ideological preferences, political polarization, life satisfaction, sentiment on social media, and gender-career and racial bias. After we provided them with historical trend data on the relevant domain, social scientists submitted pre-registered monthly forecasts for a year (Tournament 1; N = 86 teams and 359 forecasts), with an opportunity to update forecasts on the basis of new data six months later (Tournament 2; N = 120 teams and 546 forecasts). Benchmarking forecasting accuracy revealed that social scientists' forecasts were on average no more accurate than those of simple statistical models (historical means, random walks or linear regressions) or the aggregate forecasts of a sample from the general public (N = 802). However, scientists were more accurate if they had scientific expertise in a prediction domain, were interdisciplinary, used simpler models and based predictions on prior data.
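The abstract names three simple statistical benchmarks (historical means, random walks, linear regressions). The sketch below shows what those baselines look like on an invented monthly series and scores them with mean absolute error; it is an illustration under those assumptions, not the tournaments' actual benchmarking code or data.

```python
# Hedged sketch: three naive forecasting benchmarks on a toy monthly series.
import numpy as np

history = np.array([51.2, 50.8, 52.1, 53.0, 52.6, 53.4,
                    54.1, 53.8, 54.6, 55.0, 54.7, 55.3])  # 12 past months
horizon = 12                                              # forecast 12 months ahead

# 1. Historical mean: repeat the average of the past series.
mean_forecast = np.full(horizon, history.mean())

# 2. Random walk: repeat the last observed value.
rw_forecast = np.full(horizon, history[-1])

# 3. Linear regression on time: extrapolate a fitted straight line.
t = np.arange(len(history))
slope, intercept = np.polyfit(t, history, deg=1)
future_t = np.arange(len(history), len(history) + horizon)
lin_forecast = intercept + slope * future_t

def mae(forecast, actual):
    """Mean absolute error, one simple accuracy measure."""
    return np.mean(np.abs(forecast - actual))

# Hypothetical "ground truth" for the next 12 months, for illustration only.
actual = history[-1] + np.cumsum(np.random.default_rng(0).normal(0.1, 0.5, horizon))
for name, fc in [("historical mean", mean_forecast),
                 ("random walk", rw_forecast),
                 ("linear trend", lin_forecast)]:
    print(f"{name:15s} MAE = {mae(fc, actual):.2f}")
```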
3.
  • Jones, Benedict C., et al. (authors)
  • To which world regions does the valence-dominance model of social perception apply?
  • 2021
  • In: Nature Human Behaviour (Springer Science and Business Media LLC), ISSN 2397-3374; 5:1, pp. 159-169
  • Journal article (peer-reviewed). Abstract:
    • Over the past 10 years, Oosterhof and Todorov's valence-dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgements of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov's methodology across 11 world regions, 41 countries and 11,570 participants. When we used Oosterhof and Todorov's original analysis strategy, the valence-dominance model generalized across regions. When we used an alternative methodology to allow for correlated dimensions, we observed much less generalization. Collectively, these results suggest that, while the valence-dominance model generalizes very well across regions when dimensions are forced to be orthogonal, regional differences are revealed when we use different extraction methods and correlate and rotate the dimension reduction solution. PROTOCOL REGISTRATION: The stage 1 protocol for this Registered Report was accepted in principle on 5 November 2018. The protocol, as accepted by the journal, can be found at https://doi.org/10.6084/m9.figshare.7611443.v1 .
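The abstract's contrast between forced-orthogonal and correlated dimensions can be made concrete with a small simulation: principal-component scores are uncorrelated by construction, so a PCA-style analysis cannot recover a correlation between two latent dimensions even when one exists. Everything below is simulated for illustration; it is not the study's data or analysis pipeline.

```python
# Hedged sketch: PCA forces orthogonal dimensions, hiding a latent correlation.
import numpy as np

rng = np.random.default_rng(42)
n_faces, true_corr = 500, 0.4

# Simulate two latent dimensions ("valence", "dominance") with a known correlation.
cov = np.array([[1.0, true_corr], [true_corr, 1.0]])
latent = rng.multivariate_normal([0, 0], cov, size=n_faces)

# Simulate 13 trait ratings as noisy mixtures of the two latent dimensions.
loadings = rng.uniform(0.3, 0.9, size=(2, 13))
ratings = latent @ loadings + rng.normal(0, 0.5, size=(n_faces, 13))

# PCA via eigendecomposition of the correlation matrix (orthogonal components).
z = (ratings - ratings.mean(0)) / ratings.std(0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
components = eigvecs[:, ::-1][:, :2]           # top two components
scores = z @ components

print("true latent correlation:        %.2f" % true_corr)
print("correlation of latent samples:  %.2f" % np.corrcoef(latent.T)[0, 1])
print("correlation of PCA scores:      %.2f" % np.corrcoef(scores.T)[0, 1])
# The PCA scores' correlation is ~0 regardless of the latent correlation;
# an oblique factor rotation (e.g. promax) would let that correlation surface.
```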
4.
  • O'Donnell, Michael, et al. (authors)
  • Registered Replication Report: Dijksterhuis and van Knippenberg (1998)
  • 2018
  • In: Perspectives on Psychological Science (Sage Publications), ISSN 1745-6916, 1745-6924; 13:2, pp. 268-294
  • Journal article (peer-reviewed). Abstract:
    • Dijksterhuis and van Knippenberg (1998) reported that participants primed with a category associated with intelligence (professor) subsequently performed 13% better on a trivia test than participants primed with a category associated with a lack of intelligence (soccer hooligans). In two unpublished replications of this study designed to verify the appropriate testing procedures, Dijksterhuis, van Knippenberg, and Holland observed a smaller difference between conditions (2%-3%) as well as a gender difference: Men showed the effect (9.3% and 7.6%), but women did not (0.3% and -0.3%). The procedure used in those replications served as the basis for this multilab Registered Replication Report. A total of 40 laboratories collected data for this project, and 23 of these laboratories met all inclusion criteria. Here we report the meta-analytic results for those 23 direct replications (total N = 4,493), which tested whether performance on a 30-item general-knowledge trivia task differed between these two priming conditions (results of supplementary analyses of the data from all 40 labs, N = 6,454, are also reported). We observed no overall difference in trivia performance between participants primed with the professor category and those primed with the hooligan category (0.14%) and no moderation by gender.
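A minimal sketch of the inverse-variance-weighted (fixed-effect) pooling that multi-lab reports of this kind rely on. The per-lab mean differences and standard errors below are invented, not the 23-lab data, and the report's actual meta-analytic model may differ.

```python
# Hedged sketch: fixed-effect meta-analysis of per-lab mean differences.
import math

# Each tuple: (mean difference in % correct, standard error of that difference).
labs = [(0.8, 1.1), (-0.5, 0.9), (0.3, 1.4), (0.1, 1.0), (-0.2, 1.2)]

weights = [1 / se**2 for _, se in labs]                   # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(labs, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled difference = {pooled:+.2f}% (SE = {pooled_se:.2f})")
print(f"95% CI ~ [{pooled - 1.96*pooled_se:+.2f}, {pooled + 1.96*pooled_se:+.2f}]")
```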
5.
  •  