SwePub
Search the SwePub database


Result list for search "WFRF:(Szaszi Barnabas)"

  • Results 1-8 of 8
1.
  • Aczel, Balazs, et al. (author)
  • Consensus-based guidance for conducting and reporting multi-analyst studies
  • 2021
  • In: eLife. - : eLife Sciences Publications. - 2050-084X. ; 10
  • Journal article (peer-reviewed); abstract:
    • Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.
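The multi-analyst design summarized above amounts to handing one dataset to several independent analysis pipelines and inspecting the spread of their estimates. A minimal sketch in Python; the data and the three analysis choices are entirely hypothetical, not taken from the paper:

```python
# Hypothetical multi-analyst comparison: three "analysts" estimate the same
# effect from the same data, each with different defensible choices.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.2, 1.0, size=500)  # invented outcome data

def analyst_mean(d):
    return d.mean()                    # analyst A: plain mean

def analyst_trimmed(d):
    lo, hi = np.quantile(d, [0.05, 0.95])
    return d[(d >= lo) & (d <= hi)].mean()  # analyst B: trims 5% per tail

def analyst_median(d):
    return np.median(d)                # analyst C: median

estimates = {f.__name__: f(data)
             for f in (analyst_mean, analyst_trimmed, analyst_median)}
print(estimates)  # the spread across analysts is the analysis-dependent variation
```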
2.
  • Bouwmeester, Sjoerd, et al. (author)
  • Registered Replication Report: Rand, Greene, and Nowak (2012)
  • 2017
  • In: Perspectives on Psychological Science. - : Sage Publications. - 1745-6916 .- 1745-6924. ; 12:3, pp. 527-542
  • Journal article (peer-reviewed); abstract:
    • In an anonymous 4-person economic game, participants contributed more money to a common project (i.e., cooperated) when required to decide quickly than when forced to delay their decision (Rand, Greene & Nowak, 2012), a pattern consistent with the social heuristics hypothesis proposed by Rand and colleagues. The results of studies using time pressure have been mixed, with some replication attempts observing similar patterns (e.g., Rand et al., 2014) and others observing null effects (e.g., Tinghög et al., 2013; Verkoeijen & Bouwmeester, 2014). This Registered Replication Report (RRR) assessed the size and variability of the effect of time pressure on cooperative decisions by combining 21 separate, preregistered replications of the critical conditions from Study 7 of the original article (Rand et al., 2012). The primary planned analysis used data from all participants who were randomly assigned to conditions and who met the protocol inclusion criteria (an intent-to-treat approach that included the 65.9% of participants in the time-pressure condition and 7.5% in the forced-delay condition who did not adhere to the time constraints), and we observed a difference in contributions of −0.37 percentage points compared with an 8.6 percentage point difference calculated from the original data. Analyzing the data as the original article did, including data only for participants who complied with the time constraints, the RRR observed a 10.37 percentage point difference in contributions compared with a 15.31 percentage point difference in the original study. In combination, the results of the intent-to-treat analysis and the compliant-only analysis are consistent with the presence of selection biases and the absence of a causal effect of time pressure on cooperation.
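The contrast between the intent-to-treat and compliant-only analyses in the abstract above can be made concrete in a few lines of Python. The sample size and contribution values below are invented; only the two inclusion rules and the approximate adherence rates follow the abstract:

```python
# Sketch of the two inclusion rules: intent-to-treat keeps everyone as
# randomized; compliant-only keeps participants who met the time constraint.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
pressure = rng.integers(0, 2, n).astype(bool)   # randomized condition
# Adherence implied by the abstract: 34.1% under time pressure, 92.5% under delay.
complied = rng.random(n) < np.where(pressure, 0.341, 0.925)
contrib = rng.uniform(0, 100, n)                # invented % of endowment contributed

def pp_diff(mask):
    """Time-pressure minus forced-delay mean contribution, in percentage points."""
    return contrib[mask & pressure].mean() - contrib[mask & ~pressure].mean()

print("intent-to-treat:", round(pp_diff(np.ones(n, dtype=bool)), 2))
print("compliant-only :", round(pp_diff(complied), 2))
```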
3.
  • Ebersole, Charles R., et al. (author)
  • Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
  • 2020
  • In: Advances in Methods and Practices in Psychological Science. - : Sage. - 2515-2467 .- 2515-2459. ; 3:3, pp. 309-331
  • Journal article (peer-reviewed); abstract:
    • Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3-9; median total sample = 1,279.5, range = 276-3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00-.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19-.50).
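The "78% smaller, on average" figure in the abstract above is an average of per-study shrinkage, not a comparison of the medians (which would give a slightly different number). A hedged sketch of that computation with invented effect sizes:

```python
# Per-study shrinkage of replication effect sizes relative to the originals,
# then averaged. The r values below are invented, not the paper's estimates.
import numpy as np

r_original    = np.array([0.19, 0.30, 0.37, 0.45, 0.50])  # hypothetical originals
r_replication = np.array([0.00, 0.05, 0.07, 0.10, 0.15])  # hypothetical replications

shrinkage = (r_original - r_replication) / r_original
print(f"average shrinkage: {shrinkage.mean():.0%}")
```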
4.
  • Kekecs, Zoltan, et al. (author)
  • Raising the value of research studies in psychological science by increasing the credibility of research reports: the transparent Psi project
  • 2023
  • In: Royal Society Open Science. - : The Royal Society. - 2054-5703. ; 10:2
  • Journal article (peer-reviewed); abstract:
    • The low reproducibility rate in social sciences has produced hesitation among researchers in accepting published findings at their face value. Despite the advent of initiatives to increase transparency in research reporting, the field is still lacking tools to verify the credibility of research reports. In the present paper, we describe methodologies that let researchers craft highly credible research and allow their peers to verify this credibility. We demonstrate the application of these methods in a multi-laboratory replication of Bem's Experiment 1 (Bem 2011 J. Pers. Soc. Psychol. 100, 407-425. (doi:10.1037/a0021524)) on extrasensory perception (ESP), which was co-designed by a consensus panel including both proponents and opponents of Bem's original hypothesis. In the study we applied direct data deposition in combination with born-open data and real-time research reports to extend transparency to protocol delivery and data collection. We also used piloting, checklists, laboratory logs and video-documented trial sessions to ascertain as-intended protocol delivery, and external research auditors to monitor research integrity. We found 49.89% successful guesses, while Bem reported a 53.07% success rate, with the chance level being 50%. Thus, Bem's findings were not replicated in our study. In the paper, we discuss the implementation, feasibility and perceived usefulness of the credibility-enhancing methodologies used throughout the project.
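One way to read the 49.89% result above is as a binomial test against the 50% chance level. A sketch assuming a hypothetical trial count, which the abstract does not state:

```python
# Testing an observed guess rate against chance. The trial count is an
# assumption for illustration; only the 49.89% rate comes from the abstract.
from scipy.stats import binomtest

n_trials = 10_000                   # hypothetical number of guessing trials
n_hits = round(0.4989 * n_trials)   # 49.89% successful guesses

result = binomtest(n_hits, n_trials, p=0.5, alternative="greater")
print(f"p-value for above-chance guessing: {result.pvalue:.3f}")
```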
5.
  • Menkveld, Albert J., et al. (author)
  • Nonstandard Errors
  • 2024
  • In: Journal of Finance. - : Wiley-Blackwell. - 0022-1082 .- 1540-6261. ; 79:3, pp. 2339-2390
  • Journal article (peer-reviewed); abstract:
    • In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: nonstandard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
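A minimal sketch of the nonstandard-error idea from the abstract above, assuming each of the 164 teams reports one point estimate for the same hypothesis: the dispersion of those estimates plays the role for researcher variation that a standard error plays for sampling variation. The team estimates below are simulated:

```python
# Nonstandard error as across-team dispersion of point estimates.
import numpy as np

rng = np.random.default_rng(2)
team_estimates = rng.normal(loc=0.1, scale=0.05, size=164)  # invented team results

nse = team_estimates.std(ddof=1)  # dispersion across teams, not across samples
print(f"nonstandard error across 164 teams: {nse:.4f}")
```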
6.
  • O'Donnell, Michael, et al. (author)
  • Registered Replication Report: Dijksterhuis and van Knippenberg (1998)
  • 2018
  • In: Perspectives on Psychological Science. - : Sage Publications. - 1745-6916 .- 1745-6924. ; 13:2, pp. 268-294
  • Journal article (peer-reviewed); abstract:
    • Dijksterhuis and van Knippenberg (1998) reported that participants primed with a category associated with intelligence (professor) subsequently performed 13% better on a trivia test than participants primed with a category associated with a lack of intelligence (soccer hooligans). In two unpublished replications of this study designed to verify the appropriate testing procedures, Dijksterhuis, van Knippenberg, and Holland observed a smaller difference between conditions (2%-3%) as well as a gender difference: Men showed the effect (9.3% and 7.6%), but women did not (0.3% and -0.3%). The procedure used in those replications served as the basis for this multilab Registered Replication Report. A total of 40 laboratories collected data for this project, and 23 of these laboratories met all inclusion criteria. Here we report the meta-analytic results for those 23 direct replications (total N = 4,493), which tested whether performance on a 30-item general-knowledge trivia task differed between these two priming conditions (results of supplementary analyses of the data from all 40 labs, N = 6,454, are also reported). We observed no overall difference in trivia performance between participants primed with the professor category and those primed with the hooligan category (0.14%) and no moderation by gender.
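The pooled 0.14% figure in the abstract above comes from a meta-analysis across labs. A minimal fixed-effect (inverse-variance) sketch with simulated lab results, not the RRR data:

```python
# Fixed-effect meta-analysis: each lab contributes a mean difference in trivia
# accuracy (percentage points) and its standard error; labs are pooled with
# inverse-variance weights. All 23 lab values below are simulated.
import numpy as np

rng = np.random.default_rng(3)
effects = rng.normal(0.14, 2.0, size=23)  # invented per-lab differences
ses = rng.uniform(1.0, 3.0, size=23)      # invented per-lab standard errors

w = 1.0 / ses**2                          # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled difference: {pooled:.2f} +/- {1.96 * pooled_se:.2f} pp")
```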
7.
8.
  • Uhlmann, Eric L., et al. (author)
  • Subjective Evidence Evaluation Survey for Multi-Analyst Studies
  • 2024
  • Other publication (other academic/artistic); abstract:
    • Multi-analyst studies explore how well an empirical claim withstands plausible alternative analyses of the same data set by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g., effect size) provided by each analysis team. Although informative about the range of plausible effects in a data set, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item Subjective Evidence Evaluation Survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous multi-analyst study.
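In data terms, the SEES described above yields a teams-by-items matrix of ratings. A hedged sketch of one plausible summary; the team count, scale, and scores are invented, and the actual 18 items are defined in the paper:

```python
# Summarizing survey responses: one row per analysis team, 18 Likert-type
# items, reduced to per-team and per-item means. All values are invented.
import numpy as np

rng = np.random.default_rng(4)
n_teams, n_items = 12, 18
responses = rng.integers(1, 8, size=(n_teams, n_items))  # 1-7 ratings

team_means = responses.mean(axis=1)  # each team's overall evaluation
item_means = responses.mean(axis=0)  # agreement across teams per item
print("per-team mean rating:", np.round(team_means, 2))
print("per-item mean rating:", np.round(item_means, 2))
```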
Publication type
journal article (7)
other publication (1)
Content type
peer-reviewed (7)
other academic/artistic (1)
Author/editor
Szaszi, Barnabas (8)
Aczel, Balazs (7)
Johannesson, Magnus (5)
Holzmeister, Felix (3)
Kirchler, Michael (3)
Zrubka, Mark (3)
Kovacs, Marton (3)
Nilsonne, Gustav (2)
van den Akker, Olmo ... (2)
Albers, Casper J. (2)
Botvinik-Nezer, Rote ... (2)
Busch, Niko A. (2)
Cataldo, Andrea M. (2)
van Dongen, Noah N. ... (2)
Dreber Almenberg, An ... (2)
Hoekstra, Rink (2)
Hoffmann, Sabine (2)
Huber, Juergen (2)
Mangin, Jean-Francoi ... (2)
Matzke, Dora (2)
Newell, Ben R. (2)
Nosek, Brian A. (2)
van Ravenzwaaij, Don (2)
Sarafoglou, Alexandr ... (2)
Schweinsberg, Martin (2)
Simons, Daniel J. (2)
Spellman, Barbara A. (2)
Wicherts, Jelte (2)
Wagenmakers, Eric-Ja ... (2)
Chartier, Christophe ... (2)
Miller, Jeremy K. (2)
Schmidt, Kathleen (2)
Vanpaemel, Wolf (2)
Yamada, Yuki (2)
Kekecs, Zoltan (2)
Dreber, Anna (2)
Palfi, Bence (2)
Szollosi, Aba (2)
Inzlicht, Michael (2)
Edlund, John E. (2)
Saunders, Blair (2)
Salamon, Janos (2)
Szecsi, Peter (2)
Tuerlinckx, Francis (2)
Ropovik, Ivan (2)
Babincak, Peter (2)
Bakos, Bence E. (2)
Banik, Gabriel (2)
Baskin, Ernest (2)
Menkveld, Albert J. (2)
University
Handelshögskolan i Stockholm (5)
Lunds universitet (3)
Göteborgs universitet (2)
Stockholms universitet (2)
Linköpings universitet (1)
Karolinska Institutet (1)
Language
English (8)
Research subject (UKÄ/SCB)
Social sciences (7)
Natural sciences (2)
