SwePub
Search results for "WFRF:(Aczel Balazs)"

  • Result 1-10 of 12
1.
  • Aczel, Balazs, et al. (author)
  • Consensus-based guidance for conducting and reporting multi-analyst studies
  • 2021
  • In: eLIFE. - : eLife Sciences Publications. - 2050-084X. ; 10
  • Journal article (peer-reviewed), abstract:
    • Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.
2.
  • Bouwmeester, Sjoerd, et al. (author)
  • Registered Replication Report : Rand, Greene, and Nowak (2012)
  • 2017
  • In: Perspectives on Psychological Science. - : SAGE Publications. - 1745-6916 .- 1745-6924. ; 12:3, s. 527-542
  • Journal article (peer-reviewed), abstract:
    • In an anonymous 4-person economic game, participants contributed more money to a common project (i.e., cooperated) when required to decide quickly than when forced to delay their decision (Rand, Greene & Nowak, 2012), a pattern consistent with the social heuristics hypothesis proposed by Rand and colleagues. The results of studies using time pressure have been mixed, with some replication attempts observing similar patterns (e.g., Rand et al., 2014) and others observing null effects (e.g., Tinghög et al., 2013; Verkoeijen & Bouwmeester, 2014). This Registered Replication Report (RRR) assessed the size and variability of the effect of time pressure on cooperative decisions by combining 21 separate, preregistered replications of the critical conditions from Study 7 of the original article (Rand et al., 2012). The primary planned analysis used data from all participants who were randomly assigned to conditions and who met the protocol inclusion criteria (an intent-to-treat approach that included the 65.9% of participants in the time-pressure condition and 7.5% in the forced-delay condition who did not adhere to the time constraints), and we observed a difference in contributions of −0.37 percentage points compared with an 8.6 percentage point difference calculated from the original data. Analyzing the data as the original article did, including data only for participants who complied with the time constraints, the RRR observed a 10.37 percentage point difference in contributions compared with a 15.31 percentage point difference in the original study. In combination, the results of the intent-to-treat analysis and the compliant-only analysis are consistent with the presence of selection biases and the absence of a causal effect of time pressure on cooperation.
3.
  • Buchanan, Erin M., et al. (author)
  • Getting Started Creating Data Dictionaries : How to Create a Shareable Data Set
  • 2021
  • In: Advances in Methods and Practices in Psychological Science. - : Sage Publications. - 2515-2459 .- 2515-2467. ; 4:1
  • Journal article (peer-reviewed), abstract:
    • As researchers embrace open and transparent data sharing, they will need to provide information about their data that effectively helps others understand their data sets’ contents. Without proper documentation, data stored in online repositories such as OSF will often be rendered unfindable and unreadable by other researchers and indexing search engines. Data dictionaries and codebooks provide a wealth of information about variables, data collection, and other important facets of a data set. This information, called metadata, provides key insights into how the data might be further used in research and facilitates search-engine indexing to reach a broader audience of interested parties. This Tutorial first explains terminology and standards relevant to data dictionaries and codebooks. Accompanying information on OSF presents a guided workflow of the entire process from source data (e.g., survey answers on Qualtrics) to an openly shared data set accompanied by a data dictionary or codebook that follows an agreed-upon standard. Finally, we discuss freely available Web applications to assist this process of ensuring that psychology data are findable, accessible, interoperable, and reusable.
4.
  • Ebersole, Charles R., et al. (author)
  • Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
  • 2020
  • In: Advances in Methods and Practices in Psychological Science. - : Sage. - 2515-2467 .- 2515-2459. ; 3:3, s. 309-331
  • Journal article (peer-reviewed), abstract:
    • Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3-9; median total sample = 1,279.5, range = 276-3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Delta r = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00-.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19-.50).
5.
  • Jones, Benedict C, et al. (author)
  • To which world regions does the valence-dominance model of social perception apply?
  • 2021
  • In: Nature Human Behaviour. - : Springer Science and Business Media LLC. - 2397-3374. ; 5:1, s. 159-169
  • Journal article (peer-reviewed), abstract:
    • Over the past 10 years, Oosterhof and Todorov's valence-dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgements of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov's methodology across 11 world regions, 41 countries and 11,570 participants. When we used Oosterhof and Todorov's original analysis strategy, the valence-dominance model generalized across regions. When we used an alternative methodology to allow for correlated dimensions, we observed much less generalization. Collectively, these results suggest that, while the valence-dominance model generalizes very well across regions when dimensions are forced to be orthogonal, regional differences are revealed when we use different extraction methods and correlate and rotate the dimension reduction solution. PROTOCOL REGISTRATION: The stage 1 protocol for this Registered Report was accepted in principle on 5 November 2018. The protocol, as accepted by the journal, can be found at https://doi.org/10.6084/m9.figshare.7611443.v1 .
6.
  • Kekecs, Zoltan, et al. (author)
  • Raising the value of research studies in psychological science by increasing the credibility of research reports : the transparent Psi project
  • 2023
  • In: Royal Society Open Science. - : The Royal Society. - 2054-5703. ; 10:2
  • Journal article (peer-reviewed), abstract:
    • The low reproducibility rate in social sciences has produced hesitation among researchers in accepting published findings at their face value. Despite the advent of initiatives to increase transparency in research reporting, the field is still lacking tools to verify the credibility of research reports. In the present paper, we describe methodologies that let researchers craft highly credible research and allow their peers to verify this credibility. We demonstrate the application of these methods in a multi-laboratory replication of Bem's Experiment 1 (Bem 2011 J. Pers. Soc. Psychol. 100, 407-425. (doi:10.1037/a0021524)) on extrasensory perception (ESP), which was co-designed by a consensus panel including both proponents and opponents of Bem's original hypothesis. In the study we applied direct data deposition in combination with born-open data and real-time research reports to extend transparency to protocol delivery and data collection. We also used piloting, checklists, laboratory logs and video-documented trial sessions to ascertain as-intended protocol delivery, and external research auditors to monitor research integrity. We found 49.89% successful guesses, while Bem reported 53.07% success rate, with the chance level being 50%. Thus, Bem's findings were not replicated in our study. In the paper, we discuss the implementation, feasibility and perceived usefulness of the credibility-enhancing methodologies used throughout the project.
7.
  • McCarthy, Randy J., et al. (author)
  • Registered Replication Report on Srull and Wyer (1979)
  • 2018
  • In: Advances in Methods and Practices in Psychological Science. - : SAGE Publications Inc. - 2515-2459 .- 2515-2467. ; 1:3, s. 321-336
  • Journal article (peer-reviewed), abstract:
    • Srull and Wyer (1979) demonstrated that exposing participants to more hostility-related stimuli caused them subsequently to interpret ambiguous behaviors as more hostile. In their Experiment 1, participants descrambled sets of words to form sentences. In one condition, 80% of the descrambled sentences described hostile behaviors, and in another condition, 20% described hostile behaviors. Following the descrambling task, all participants read a vignette about a man named Donald who behaved in an ambiguously hostile manner and then rated him on a set of personality traits. Next, participants rated the hostility of various ambiguously hostile behaviors (all ratings on scales from 0 to 10). Participants who descrambled mostly hostile sentences rated Donald and the ambiguous behaviors as approximately 3 scale points more hostile than did those who descrambled mostly neutral sentences. This Registered Replication Report describes the results of 26 independent replications (N = 7,373 in the total sample; k = 22 labs and N = 5,610 in the primary analyses) of Srull and Wyer's Experiment 1, each of which followed a preregistered and vetted protocol. A random-effects meta-analysis showed that the protagonist was seen as 0.08 scale points more hostile when participants were primed with 80% hostile sentences than when they were primed with 20% hostile sentences (95% confidence interval, CI = [0.004, 0.16]). The ambiguously hostile behaviors were seen as 0.08 points less hostile when participants were primed with 80% hostile sentences than when they were primed with 20% hostile sentences (95% CI = [−0.18, 0.01]). Although the confidence interval for one outcome excluded zero and the observed effect was in the predicted direction, these results suggest that the currently used methods do not produce an assimilative priming effect that is practically and routinely detectable.
8.
  • Moshontz, Hannah, et al. (author)
  • The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network
  • 2018
  • In: Advances in Methods and Practices in Psychological Science. - : SAGE Publications. - 2515-2459 .- 2515-2467. ; 1:4, s. 501-515
  • Journal article (peer-reviewed), abstract:
    • Concerns about the veracity of psychological research have been growing. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions or replicate prior research in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time limited), efficient (in that structures and principles are reused for different projects), decentralized, diverse (in both subjects and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside the network). The PSA and other approaches to crowdsourced psychological science will advance understanding of mental processes and behaviors by enabling rigorous research and systematic examination of its generalizability.
9.
  • O'Donnell, Michael, et al. (author)
  • Registered Replication Report: Dijksterhuis and van Knippenberg (1998)
  • 2018
  • In: Perspectives on Psychological Science. - : SAGE Publications Ltd. - 1745-6916 .- 1745-6924. ; 13:2, s. 268-294
  • Journal article (peer-reviewed), abstract:
    • Dijksterhuis and van Knippenberg (1998) reported that participants primed with a category associated with intelligence (professor) subsequently performed 13% better on a trivia test than participants primed with a category associated with a lack of intelligence (soccer hooligans). In two unpublished replications of this study designed to verify the appropriate testing procedures, Dijksterhuis, van Knippenberg, and Holland observed a smaller difference between conditions (2%-3%) as well as a gender difference: Men showed the effect (9.3% and 7.6%), but women did not (0.3% and -0.3%). The procedure used in those replications served as the basis for this multilab Registered Replication Report. A total of 40 laboratories collected data for this project, and 23 of these laboratories met all inclusion criteria. Here we report the meta-analytic results for those 23 direct replications (total N = 4,493), which tested whether performance on a 30-item general-knowledge trivia task differed between these two priming conditions (results of supplementary analyses of the data from all 40 labs, N = 6,454, are also reported). We observed no overall difference in trivia performance between participants primed with the professor category and those primed with the hooligan category (0.14%) and no moderation by gender.
  •  
10.
Type of publication
journal article (11)
other publication (1)
Type of content
peer-reviewed (11)
other academic/artistic (1)
Author/Editor
Aczel, Balazs (12)
Szaszi, Barnabas (7)
Vanpaemel, Wolf (6)
Holzmeister, Felix (4)
Johannesson, Magnus (4)
Kirchler, Michael (4)
Chartier, Christophe ... (4)
Miller, Jeremy K. (4)
Schmidt, Kathleen (4)
Voracek, Martin (4)
Huber, Juergen (3)
Levitan, Carmel A. (3)
Vianello, Michelange ... (3)
Tinghög, Gustav, 197 ... (3)
Västfjäll, Daniel, 1 ... (3)
Kekecs, Zoltan (3)
Nilsonne, Gustav (2)
van den Akker, Olmo ... (2)
Albers, Casper J. (2)
Botvinik-Nezer, Rote ... (2)
Busch, Niko A. (2)
Cataldo, Andrea M. (2)
van Dongen, Noah N. ... (2)
Hoekstra, Rink (2)
Hoffmann, Sabine (2)
Mangin, Jean-Francoi ... (2)
Matzke, Dora (2)
Newell, Ben R. (2)
Nosek, Brian A. (2)
van Ravenzwaaij, Don (2)
Sarafoglou, Alexandr ... (2)
Schweinsberg, Martin (2)
Simons, Daniel J. (2)
Spellman, Barbara A. (2)
Wicherts, Jelte (2)
Wagenmakers, Eric-Ja ... (2)
Sullivan, Gavin Bren ... (2)
Stieger, Stefan (2)
Yamada, Yuki (2)
Olsen, Jerome (2)
Schei, Vidar (2)
Dreber, Anna (2)
Isager, Peder M. (2)
Palfi, Bence (2)
Szollosi, Aba (2)
Jaeger, Bastian (2)
Roets, Arne (2)
Mechtel, Mario (2)
Peters, Kim (2)
Scopelliti, Irene (2)
University
Lund University (4)
Stockholm School of Economics (4)
Linköping University (3)
University of Gothenburg (2)
Stockholm University (2)
University West (1)
Linnaeus University (1)
Karolinska Institutet (1)
Language
English (12)
Research subject (UKÄ/SCB)
Social Sciences (11)
Natural sciences (1)
