SwePub

Result list for search "WFRF:(Ropovik Ivan)"


  • Results 1-6 of 6
1.
  • Schweinsberg, Martin, et al. (author)
  • Same data, different conclusions: Radical dispersion in empirical results when independent analysts operationalize and test the same hypothesis
  • 2021
  • In: Organizational Behavior and Human Decision Processes. Elsevier BV. ISSN 0749-5978, 1095-9920. 165, pp. 228-249.
  • Journal article (peer-reviewed), abstract:
    • In this crowdsourced initiative, independent analysts used the same dataset to test two hypotheses regarding the effects of scientists' gender and professional status on verbosity during group meetings. Not only the analytic approach but also the operationalizations of key variables were left unconstrained and up to individual analysts. For instance, analysts could choose to operationalize status as job title, institutional ranking, citation counts, or some combination. To maximize transparency regarding the process by which analytic choices are made, the analysts used a platform we developed called DataExplained to justify both preferred and rejected analytic paths in real time. Analyses lacking sufficient detail, reproducible code, or with statistical errors were excluded, resulting in 29 analyses in the final sample. Researchers reported radically different analyses and dispersed empirical outcomes, in a number of cases obtaining significant effects in opposite directions for the same research question. A Boba multiverse analysis demonstrates that decisions about how to operationalize variables explain variability in outcomes above and beyond statistical choices (e.g., covariates). Subjective researcher decisions play a critical role in driving the reported empirical results, underscoring the need for open data, systematic robustness checks, and transparency regarding both analytic paths taken and not taken. Implications for organizations and leaders, whose decision making relies in part on scientific findings, consulting reports, and internal analyses by data scientists, are discussed.
2.
  • Ebersole, Charles R., et al. (author)
  • Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
  • 2020
  • In: Advances in Methods and Practices in Psychological Science. Sage. ISSN 2515-2467, 2515-2459. 3:3, pp. 309-331.
  • Journal article (peer-reviewed), abstract:
    • Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3-9; median total sample = 1,279.5, range = 276-3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00-.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19-.50).
3.
  • Grossmann, Igor, et al. (author)
  • Insights into the accuracy of social scientists' forecasts of societal change
  • 2023
  • In: Nature Human Behaviour. Springer Nature. ISSN 2397-3374. 7, pp. 484-501.
  • Journal article (peer-reviewed), abstract:
    • How well can social scientists predict societal change, and what processes underlie their predictions? To answer these questions, we ran two forecasting tournaments testing the accuracy of predictions of societal change in domains commonly studied in the social sciences: ideological preferences, political polarization, life satisfaction, sentiment on social media, and gender-career and racial bias. After we provided them with historical trend data on the relevant domain, social scientists submitted pre-registered monthly forecasts for a year (Tournament 1; N = 86 teams and 359 forecasts), with an opportunity to update forecasts on the basis of new data six months later (Tournament 2; N = 120 teams and 546 forecasts). Benchmarking forecasting accuracy revealed that social scientists' forecasts were on average no more accurate than those of simple statistical models (historical means, random walks or linear regressions) or the aggregate forecasts of a sample from the general public (N = 802). However, scientists were more accurate if they had scientific expertise in a prediction domain, were interdisciplinary, used simpler models and based predictions on prior data.
4.
  • Jones, Benedict C., et al. (author)
  • To which world regions does the valence-dominance model of social perception apply?
  • 2021
  • In: Nature Human Behaviour. Springer Science and Business Media LLC. ISSN 2397-3374. 5:1, pp. 159-169.
  • Journal article (peer-reviewed), abstract:
    • Over the past 10 years, Oosterhof and Todorov's valence-dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgements of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov's methodology across 11 world regions, 41 countries and 11,570 participants. When we used Oosterhof and Todorov's original analysis strategy, the valence-dominance model generalized across regions. When we used an alternative methodology to allow for correlated dimensions, we observed much less generalization. Collectively, these results suggest that, while the valence-dominance model generalizes very well across regions when dimensions are forced to be orthogonal, regional differences are revealed when we use different extraction methods and correlate and rotate the dimension reduction solution. PROTOCOL REGISTRATION: The stage 1 protocol for this Registered Report was accepted in principle on 5 November 2018. The protocol, as accepted by the journal, can be found at https://doi.org/10.6084/m9.figshare.7611443.v1.
5.
  • Moshontz, Hannah, et al. (author)
  • The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network
  • 2018
  • In: Advances in Methods and Practices in Psychological Science. SAGE Publications. ISSN 2515-2459, 2515-2467. 1:4, pp. 501-515.
  • Journal article (peer-reviewed), abstract:
    • Concerns about the veracity of psychological research have been growing. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions or replicate prior research in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time limited), efficient (in that structures and principles are reused for different projects), decentralized, diverse (in both subjects and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside the network). The PSA and other approaches to crowdsourced psychological science will advance understanding of mental processes and behaviors by enabling rigorous research and systematic examination of its generalizability.
6.
  • O'Donnell, Michael, et al. (author)
  • Registered Replication Report: Dijksterhuis and van Knippenberg (1998)
  • 2018
  • In: Perspectives on Psychological Science. SAGE Publications Ltd. ISSN 1745-6916, 1745-6924. 13:2, pp. 268-294.
  • Journal article (peer-reviewed), abstract:
    • Dijksterhuis and van Knippenberg (1998) reported that participants primed with a category associated with intelligence (professor) subsequently performed 13% better on a trivia test than participants primed with a category associated with a lack of intelligence (soccer hooligans). In two unpublished replications of this study designed to verify the appropriate testing procedures, Dijksterhuis, van Knippenberg, and Holland observed a smaller difference between conditions (2%-3%) as well as a gender difference: Men showed the effect (9.3% and 7.6%), but women did not (0.3% and -0.3%). The procedure used in those replications served as the basis for this multilab Registered Replication Report. A total of 40 laboratories collected data for this project, and 23 of these laboratories met all inclusion criteria. Here we report the meta-analytic results for those 23 direct replications (total N = 4,493), which tested whether performance on a 30-item general-knowledge trivia task differed between these two priming conditions (results of supplementary analyses of the data from all 40 labs, N = 6,454, are also reported). We observed no overall difference in trivia performance between participants primed with the professor category and those primed with the hooligan category (0.14%) and no moderation by gender.
Publication type
journal article (6)
Content type
peer-reviewed (6)
Author/editor
Ropovik, Ivan (6)
Baskin, Ernest (5)
Aczel, Balazs (4)
Chartier, Christophe ... (4)
Saunders, Blair (4)
Banik, Gabriel (4)
Protzko, John (4)
Levitan, Carmel A. (3)
Miller, Jeremy K. (3)
Schmidt, Kathleen (3)
Vanpaemel, Wolf (3)
Vianello, Michelange ... (3)
Jaeger, Bastian (3)
Inzlicht, Michael (3)
Storage, Daniel (3)
Lin, Hause (3)
Arnal, Jack D. (3)
Babincak, Peter (3)
Chopik, William J. (3)
Kacmar, Pavol (3)
Szaszi, Barnabas (2)
Sullivan, Gavin Bren ... (2)
Stieger, Stefan (2)
Voracek, Martin (2)
Olsen, Jerome (2)
Schei, Vidar (2)
Viganola, Domenico (2)
Peters, Kim (2)
Sirota, Miroslav (2)
Antfolk, Jan (2)
Batres, Carlota (2)
Ijzerman, Hans (2)
DeBruine, Lisa M. (2)
Tamnes, Christian K (2)
Ebersole, Charles R. (2)
Curran, Paul G. (2)
Edlund, John E. (2)
Salamon, Janos (2)
Zrubka, Mark (2)
Schuetz, Astrid (2)
Grahe, Jon E. (2)
Szecsi, Peter (2)
Vaughn, Leigh Ann (2)
Weissgerber, Sophia ... (2)
Lins, Samuel (2)
Corker, Katherine S. (2)
Bloxsom, Nicholas G. (2)
Adamkovic, Matus (2)
Philipp, Michael C. (2)
Pfuhl, Gerit (2)
University
Stockholms universitet (2)
Linköpings universitet (2)
Handelshögskolan i Stockholm (2)
Karolinska Institutet (2)
Göteborgs universitet (1)
Kungliga Tekniska Högskolan (1)
Uppsala universitet (1)
Högskolan Väst (1)
Lunds universitet (1)
Language
English (6)
Research subject (UKÄ/SCB)
Social Sciences (5)
Natural Sciences (2)
