SwePub
Search the SwePub database


Result list for search: WFRF:(Viganola Domenico)

  • Results 1-10 of 14
1.
  • Bishop, Michael, et al. (author)
  • Are replication rates the same across academic fields? Community forecasts from the DARPA SCORE programme
  • 2020
  • In: Royal Society Open Science. - : The Royal Society. - 2054-5703. ; 7:7
  • Journal article (peer-reviewed), abstract:
    • The Defense Advanced Research Projects Agency (DARPA) programme 'Systematizing Confidence in Open Research and Evidence' (SCORE) aims to generate confidence scores for a large number of research claims from empirical studies in the social and behavioural sciences. The confidence scores will provide a quantitative assessment of how likely a claim will hold up in an independent replication. To create the scores, we follow earlier approaches and use prediction markets and surveys to forecast replication outcomes. Based on an initial set of forecasts for the overall replication rate in SCORE and its dependence on the academic discipline and the time of publication, we show that participants expect replication rates to increase over time. Moreover, they expect replication rates to differ between fields, with the highest replication rate in economics (average survey response 58%), and the lowest in psychology and in education (average survey response of 42% for both fields). These results reveal insights into the academic community's views of the replication crisis, including for research fields for which no large-scale replication studies have been undertaken yet.
2.
  • Buckles, Grant, et al. (author)
  • Using prediction markets to predict the outcomes in the Defense Advanced Research Projects Agency’s next-generation social science programme
  • 2021
  • In: Royal Society Open Science. - : The Royal Society. - 2054-5703. ; 8:7, pp. 181308
  • Journal article (peer-reviewed), abstract:
    • There is evidence that prediction markets are useful tools to aggregate information on researchers’ beliefs about scientific results including the outcome of replications. In this study, we use prediction markets to forecast the results of novel experimental designs that test established theories. We set up prediction markets for hypotheses tested in the Defense Advanced Research Projects Agency’s (DARPA) Next Generation Social Science (NGS2) programme. Researchers were invited to bet on whether 22 hypotheses would be supported or not. We define support as a test result in the same direction as hypothesized, with a Bayes factor of at least 10 (i.e. a likelihood of the observed data being consistent with the tested hypothesis that is at least 10 times greater compared with the null hypothesis). In addition to betting on this binary outcome, we asked participants to bet on the expected effect size (in Cohen’s d) for each hypothesis. Our goal was to recruit at least 50 participants that signed up to participate in these markets. While this was the case, only 39 participants ended up actually trading. Participants also completed a survey on both the binary result and the effect size. We find that neither prediction markets nor surveys performed well in predicting outcomes for NGS2.
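The support criterion above pairs an effect in the hypothesized direction with a Bayes factor of at least 10. As a rough illustration of that kind of check, here is a minimal Python sketch on simulated two-group data; it uses the BIC approximation to the Bayes factor, which is one common shortcut and not necessarily the computation used in the NGS2 programme, and every name and number in it is hypothetical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)               # hypothetical treatment indicator
y = 0.5 * group + rng.normal(size=n)        # simulated outcome with a true effect

alt = sm.OLS(y, sm.add_constant(group.astype(float))).fit()  # model with the effect
null = sm.OLS(y, np.ones((n, 1))).fit()                      # intercept-only null

# BIC approximation: BF10 is roughly exp((BIC_null - BIC_alt) / 2)
bf10 = np.exp((null.bic - alt.bic) / 2)
right_direction = alt.params[1] > 0         # slope has the hypothesized sign
supported = right_direction and bf10 >= 10  # the binary outcome traders bet on
print(f"BF10 = {bf10:.1f}, supported = {supported}")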
3.
  • Dreber Almenberg, Anna, et al. (author)
  • Is research in social psychology politically biased? Systematic empirical tests and a forecasting survey to address the controversy
  • 2018
  • In: Journal of Experimental Social Psychology. - : Elsevier. - 1096-0465 .- 0022-1031. ; 79:November, pp. 188-199
  • Journal article (peer-reviewed), abstract:
    • The present investigation provides the first systematic empirical tests for the role of politics in academic research. In a large sample of scientific abstracts from the field of social psychology, we find both evaluative differences, such that conservatives are described more negatively than liberals, and explanatory differences, such that conservatism is more likely to be the focus of explanation than liberalism. In light of the ongoing debate about politicized science, a forecasting survey permitted scientists to state a priori empirical predictions about the results, and then change their beliefs in light of the evidence. Participating scientists accurately predicted the direction of both the evaluative and explanatory differences, but at the same time significantly overestimated both effect sizes. Scientists also updated their broader beliefs about political bias in response to the empirical results, providing a model for addressing divisive scientific controversies across fields.
4.
  • Ebersole, Charles R., et al. (author)
  • Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
  • 2020
  • In: Advances in Methods and Practices in Psychological Science. - : Sage. - 2515-2467 .- 2515-2459. ; 3:3, pp. 309-331
  • Journal article (peer-reviewed), abstract:
    • Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3-9; median total sample = 1,279.5, range = 276-3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00-.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19-.50).
5.
  • Forsell, Eskil, et al. (author)
  • Predicting replication outcomes in the Many Labs 2 study
  • 2019
  • In: Journal of Economic Psychology. - : Elsevier. - 1872-7719 .- 0167-4870. ; 75:Part A SI
  • Journal article (peer-reviewed), abstract:
    • Understanding and improving reproducibility is crucial for scientific progress. Prediction markets and related methods of eliciting peer beliefs are promising tools to predict replication outcomes. We invited researchers in the field of psychology to judge the replicability of 24 studies replicated in the large scale Many Labs 2 project. We elicited peer beliefs in prediction markets and surveys about two replication success metrics: the probability that the replication yields a statistically significant effect in the original direction (p < 0.001), and the relative effect size of the replication. The prediction markets correctly predicted 75% of the replication outcomes, and were highly correlated with the replication outcomes. Survey beliefs were also significantly correlated with replication outcomes, but had larger prediction errors. The prediction markets for relative effect sizes attracted little trading and thus did not work well. The survey beliefs about relative effect sizes performed better and were significantly correlated with observed relative effect sizes. The results suggest that replication outcomes can be predicted and that the elicitation of peer beliefs can increase our knowledge about scientific reproducibility and the dynamics of hypothesis testing.
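The two success metrics above are easy to state precisely: a replication counts as a success if it yields an effect with the original sign at p < 0.001, and the continuous metric is the replication effect size relative to the original. A minimal Python sketch with hypothetical placeholder numbers (not the Many Labs 2 data):

import numpy as np

orig_effect = np.array([0.40, 0.25, -0.30, 0.15])  # original effect sizes (hypothetical)
rep_effect = np.array([0.21, -0.05, -0.28, 0.02])  # replication estimates
rep_p = np.array([0.0002, 0.40, 0.0001, 0.30])     # replication p-values

# Binary metric: significant effect (p < 0.001) in the original direction.
replicated = (np.sign(rep_effect) == np.sign(orig_effect)) & (rep_p < 0.001)
# Continuous metric: replication effect size relative to the original.
relative_es = rep_effect / orig_effect
print(replicated)    # [ True False  True False]
print(relative_es)   # roughly [0.525, -0.2, 0.933, 0.133]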
6.
7.
  • Johannesson, Magnus, et al. (author)
  • Predicting replicability - Analysis of survey and prediction market data from large-scale forecasting projects
  • 2021
  • In: PLoS ONE. - : Public Library of Science. - 1932-6203. ; 16:4
  • Journal article (peer-reviewed), abstract:
    • The reproducibility of published research has become an important topic in science policy. A number of large-scale replication projects have been conducted to gauge the overall reproducibility in specific academic fields. Here, we present an analysis of data from four studies which sought to forecast the outcomes of replication projects in the social and behavioural sciences, using human experts who participated in prediction markets and answered surveys. Because the number of findings replicated and predicted in each individual study was small, pooling the data offers an opportunity to evaluate hypotheses regarding the performance of prediction markets and surveys at a higher power. In total, peer beliefs were elicited for the replication outcomes of 103 published findings. We find there is information within the scientific community about the replicability of scientific findings, and that both surveys and prediction markets can be used to elicit and aggregate this information. Our results show prediction markets can determine the outcomes of direct replications with 73% accuracy (n = 103). Both the prediction market prices and the average survey responses are correlated with outcomes (0.581 and 0.564 respectively, both p < .001). We also found a significant relationship between p-values of the original findings and replication outcomes. The dataset is made available through the R package "pooledmaRket" and can be used to further study community beliefs towards replication outcomes as elicited in the surveys and prediction markets.
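As a hedged illustration of the evaluation described above, the sketch below computes the two headline quantities (classification accuracy of the final market price, and the correlation of prices and survey means with binary outcomes) on simulated stand-in data; the real data ship in the R package "pooledmaRket" mentioned in the abstract, and nothing here reproduces the published numbers.

import numpy as np

rng = np.random.default_rng(1)
n = 103                                     # number of pooled findings
outcome = rng.random(n) < 0.55              # hypothetical replication outcomes
price = np.clip(outcome + rng.normal(0, 0.20, n), 0.02, 0.98)   # noisy market prices
survey = np.clip(outcome + rng.normal(0, 0.25, n), 0.02, 0.98)  # noisy survey means

accuracy = np.mean((price > 0.5) == outcome)  # price > 0.5 read as "will replicate"
r_price = np.corrcoef(price, outcome)[0, 1]   # Pearson r with a binary variable
r_survey = np.corrcoef(survey, outcome)[0, 1] #   equals the point-biserial r
print(f"accuracy={accuracy:.2f}, r_price={r_price:.2f}, r_survey={r_survey:.2f}")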
8.
  • Schweinsberg, Martin, et al. (author)
  • Same data, different conclusions: Radical dispersion in empirical results when independent analysts operationalize and test the same hypothesis
  • 2021
  • In: Organizational Behavior and Human Decision Processes. - : Elsevier BV. - 0749-5978 .- 1095-9920. ; 165, pp. 228-249
  • Journal article (peer-reviewed), abstract:
    • In this crowdsourced initiative, independent analysts used the same dataset to test two hypotheses regarding the effects of scientists' gender and professional status on verbosity during group meetings. Not only the analytic approach but also the operationalizations of key variables were left unconstrained and up to individual analysts. For instance, analysts could choose to operationalize status as job title, institutional ranking, citation counts, or some combination. To maximize transparency regarding the process by which analytic choices are made, the analysts used a platform we developed called DataExplained to justify both preferred and rejected analytic paths in real time. Analyses lacking sufficient detail, reproducible code, or with statistical errors were excluded, resulting in 29 analyses in the final sample. Researchers reported radically different analyses and dispersed empirical outcomes, in a number of cases obtaining significant effects in opposite directions for the same research question. A Boba multiverse analysis demonstrates that decisions about how to operationalize variables explain variability in outcomes above and beyond statistical choices (e.g., covariates). Subjective researcher decisions play a critical role in driving the reported empirical results, underscoring the need for open data, systematic robustness checks, and transparency regarding both analytic paths taken and not taken. Implications for organizations and leaders, whose decision making relies in part on scientific findings, consulting reports, and internal analyses by data scientists, are discussed.
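A minimal multiverse-style sketch of the idea in the abstract above: run the same hypothesis test under every combination of an operationalization choice and a covariate choice, then look at the spread of the estimates. The data, variable names, and choices below are hypothetical, and this is not the Boba tool or the study's actual pipeline.

from itertools import product

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "verbosity": rng.normal(size=n),
    "job_rank": rng.normal(size=n),   # one way to operationalize status
    "citations": rng.normal(size=n),  # another operationalization
    "gender": rng.integers(0, 2, n),
    "seniority": rng.normal(size=n),
})

status_ops = ["job_rank", "citations"]                       # operationalization choices
covariate_sets = ["", " + gender", " + gender + seniority"]  # statistical choices

estimates = []
for status, covs in product(status_ops, covariate_sets):
    fit = smf.ols(f"verbosity ~ {status}{covs}", data=df).fit()
    estimates.append(fit.params[status])  # coefficient for the chosen status measure

# Dispersion across the 6 specifications is the quantity of interest.
print(f"min={min(estimates):.3f}, max={max(estimates):.3f}")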
9.
  • Tierney, Warren, et al. (author)
  • Creative destruction in science
  • 2020
  • In: Organizational Behavior and Human Decision Processes. - : Elsevier BV. - 0749-5978 .- 1095-9920. ; 161, pp. 291-309
  • Journal article (peer-reviewed), abstract:
    • Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents' reasoning about day care options, and gender discrimination in hiring decisions.
      Significance statement: It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void, reducing confidence that the original theoretical prediction is true but not replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building.
      Scientific transparency statement: The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.
10.
  • Viganola, Domenico, et al. (author)
  • Datasets from a research project examining the role of politics in social psychological research
  • 2018
  • In: Scientific Data. - : Nature Research. - 2052-4463. ; 5
  • Journal article (peer-reviewed), abstract:
    • We present four datasets from a project examining the role of politics in social psychological research. These include thousands of independent raters who coded scientific abstracts for political relevance and for whether conservatives or liberals were treated as targets of explanation and characterized in a negative light. Further included are predictions about the empirical results by scientists participating in a forecasting survey, and coded publication outcomes for unpublished research projects varying in political overtones. Future researchers can leverage this corpus to test further hypotheses regarding political values and scientific research, perceptions of political bias, publication histories, and forecasting accuracy.
Publication type
journal article (9)
other publication (4)
doctoral thesis (1)
Content type
peer-reviewed (9)
other academic/artistic (5)
Author/editor
Viganola, Domenico (14)
Dreber Almenberg, An ... (11)
Johannesson, Magnus (11)
Pfeiffer, Thomas (11)
Uhlmann, Eric Luis (7)
Chen, Yiling (4)
Nosek, Brian A. (3)
Gordon, Michael (3)
Liu, Yang (2)
Nilsonne, Gustav (2)
Aczel, Balazs (1)
Szaszi, Barnabas (1)
van den Akker, Olmo ... (1)
Holzmeister, Felix (1)
Schweinsberg, Martin (1)
Silberzahn, Raphael (1)
Sullivan, Gavin Bren ... (1)
Danielsson, Henrik, ... (1)
Miller, David (1)
Almenberg, Johan (1)
Forsell, Eskil (1)
Nave, Gideon (1)
Robinson, David (1)
Bahník, Štěpán (1)
Chartier, Christophe ... (1)
Fedor, Anna (1)
Frank, Michael C. (1)
Hartshorne, Joshua K ... (1)
Levitan, Carmel A. (1)
Miller, Jeremy K. (1)
Schmidt, Kathleen (1)
van Aert, Robbie C. ... (1)
van Assen, Marcel A. ... (1)
Vanpaemel, Wolf (1)
Vianello, Michelange ... (1)
Villeseche, Florence (1)
Zandian, Arash (1)
Bialek, Michal (1)
Muda, Rafal (1)
Clark, Michael (1)
Wolf, Daniel (1)
Dreber, Anna (1)
Bernstein, Michael H (1)
Bishop, Michael (1)
Goldfedder, Brandon (1)
Twardy, Charles (1)
Wang, Juntao (1)
Kane, David (1)
Gnambs, Timo (1)
Kauff, Mathias (1)
Higher education institution
Handelshögskolan i Stockholm (14)
Stockholms universitet (2)
Karolinska Institutet (2)
Kungliga Tekniska Högskolan (1)
Linköpings universitet (1)
Language
English (14)
Research subject (UKÄ/SCB)
Social sciences (14)
Natural sciences (1)

Year
