SwePub
Search the SwePub database


Result list for the search "L773:1554 3528 srt2:(2015-2019)"


  • Result 1-10 of 23
1.
  • Leppänen, Jukka M, et al. (author)
  • Widely applicable MATLAB routines for automated analysis of saccadic reaction times
  • 2015
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 47:2, s. 538-548
  • Journal article (peer-reviewed), abstract:
    • Saccadic reaction time (SRT) is a widely used dependent variable in eye-tracking studies of human cognition and its disorders. SRTs are also frequently measured in studies with special populations, such as infants and young children, who are limited in their ability to follow verbal instructions and remain in a stable position over time. In this article, we describe a library of MATLAB routines (Mathworks, Natick, MA) that are designed to (1) enable completely automated implementation of SRT analysis for multiple data sets and (2) cope with the unique challenges of analyzing SRTs from eye-tracking data collected from poorly cooperating participants. The library includes preprocessing and SRT analysis routines. The preprocessing routines (i.e., moving median filter and interpolation) are designed to remove technical artifacts and missing samples from raw eye-tracking data. The SRTs are detected by a simple algorithm that identifies the last point of gaze in the area of interest, but, critically, the extracted SRTs are further subjected to a number of postanalysis verification checks to exclude values contaminated by artifacts. Example analyses of data from 5- to 11-month-old infants demonstrated that SRTs extracted with the proposed routines were in high agreement with SRTs obtained manually from video records, robust against potential sources of artifact, and exhibited moderate to high test-retest stability. We propose that the present library has wide utility in standardizing and automating SRT-based cognitive testing in various populations. The MATLAB routines are open source and can be downloaded from http://www.uta.fi/med/icl/methods.html .
  •  
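
The routines described in the record above are MATLAB code; the Python sketch below, with invented function and variable names, only illustrates the two steps named in the abstract: a moving median filter over raw gaze samples and SRT extraction as the time of the last gaze sample inside the central area of interest, relative to stimulus onset.

    import numpy as np

    def moving_median(x, window=5):
        """Moving median filter to suppress spike artifacts in raw gaze samples."""
        half = window // 2
        padded = np.pad(x, half, mode="edge")
        return np.array([np.median(padded[i:i + window]) for i in range(len(x))])

    def saccadic_rt(t, gaze_x, aoi=(0.4, 0.6), stimulus_onset=0.0):
        """SRT = time of the last gaze sample inside the central AOI, minus stimulus onset."""
        inside = (gaze_x >= aoi[0]) & (gaze_x <= aoi[1])
        if not inside.any():
            return None                          # no usable fixation in the AOI
        last_inside = np.flatnonzero(inside)[-1]
        return t[last_inside] - stimulus_onset

    # Toy trial: gaze leaves the centre roughly 0.3 s after target onset
    t = np.arange(0, 1.0, 0.01)
    gaze = np.where(t < 0.3, 0.5, 0.9) + np.random.normal(0, 0.005, t.size)
    print(saccadic_rt(t, moving_median(gaze)))
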
2.
  • Łuniewska, Magdalena, et al. (author)
  • Ratings of age of acquisition of 299 words across 25 languages : is there a cross-linguistic order of words?
  • 2016
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528 .- 1554-351X. ; 48:3, s. 1154-1177
  • Journal article (peer-reviewed), abstract:
    • We present a new set of subjective age-of-acquisition (AoA) ratings for 299 words (158 nouns, 141 verbs) in 25 languages from five language families (Afro-Asiatic: Semitic languages; Altaic: one Turkic language; Indo-European: Baltic, Celtic, Germanic, Hellenic, Slavic, and Romance languages; Niger-Congo: one Bantu language; Uralic: Finnic and Ugric languages). Adult native speakers reported the age at which they had learned each word. We present a comparison of the AoA ratings across all languages by contrasting them in pairs. This comparison shows a consistency in the orders of ratings across the 25 languages. The data were then analyzed (1) to ascertain how the demographic characteristics of the participants influenced AoA estimations and (2) to assess differences caused by the exact form of the target question (when did you learn vs. when do children learn this word); (3) to compare the ratings obtained in our study to those of previous studies; and (4) to assess the validity of our study by comparison with quasi-objective AoA norms derived from the MacArthur–Bates Communicative Development Inventories (MB-CDI). All 299 words were judged as being acquired early (mostly before the age of 6 years). AoA ratings were associated with the raters’ social or language status, but not with the raters’ age or education. Parents reported words as being learned earlier, and bilinguals reported learning them later. Estimations of the age at which children learn the words revealed significantly lower ratings of AoA. Finally, comparisons with previous AoA and MB-CDI norms support the validity of the present estimations. Our AoA ratings are available for research or other purposes.
  •  
3.
  • Nyström, Pär, et al. (author)
  • The TimeStudio Project : An open source scientific workflow system for the behavioral and brain sciences
  • 2016
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 48:2, s. 542-552
  • Journal article (peer-reviewed), abstract:
    • This article describes a new open source scientific workflow system, the TimeStudio Project, dedicated to the behavioral and brain sciences. The program is written in MATLAB and features a graphical user interface for the dynamic pipelining of computer algorithms developed as TimeStudio plugins. TimeStudio includes both a set of general plugins (for reading data files, modifying data structures, visualizing data structures, etc.) and a set of plugins specifically developed for the analysis of event-related eyetracking data as a proof of concept. It is possible to create custom plugins to integrate new or existing MATLAB code anywhere in a workflow, making TimeStudio a flexible workbench for organizing and performing a wide range of analyses. The system also features an integrated sharing and archiving tool for TimeStudio workflows, which can be used to share workflows both during the data analysis phase and after scientific publication. TimeStudio thus facilitates the reproduction and replication of scientific studies, increases the transparency of analyses, and reduces individual researchers' analysis workload. The project website ( http://timestudioproject.com ) contains the latest releases of TimeStudio, together with documentation and user forums.
  •  
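
TimeStudio itself is MATLAB software with its own plugin API; the Python sketch below only illustrates the general pattern the abstract describes, a workflow assembled by chaining independent plugins that each receive and return a shared data structure. All names here are invented for illustration, not TimeStudio's actual interface.

    from typing import Callable

    # A "plugin" here is just a function that takes the shared data dict and returns it,
    # loosely analogous to one pipelined analysis step.
    Plugin = Callable[[dict], dict]

    def load_samples(data: dict) -> dict:
        data["samples"] = [0.51, 0.50, None, 0.52, 0.90]   # stand-in for reading a data file
        return data

    def interpolate_missing(data: dict) -> dict:
        samples = data["samples"]
        # Toy repair step: fill a missing sample with the previous value
        data["samples"] = [s if s is not None else samples[i - 1] for i, s in enumerate(samples)]
        return data

    def summarize(data: dict) -> dict:
        data["mean"] = sum(data["samples"]) / len(data["samples"])
        return data

    def run_workflow(plugins: list) -> dict:
        """Run each plugin in order on a shared data structure."""
        data: dict = {}
        for plugin in plugins:
            data = plugin(data)
        return data

    print(run_workflow([load_samples, interpolate_missing, summarize])["mean"])
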
4.
  • Rofes, Adrià, et al. (author)
  • Imageability ratings across languages
  • 2018
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 50:3, s. 1187-1197
  • Journal article (peer-reviewed), abstract:
    • Imageability is a psycholinguistic variable that indicates how well a word gives rise to a mental image or sensory experience. Imageability ratings are used extensively in psycholinguistic, neuropsychological, and aphasiological studies. However, little formal knowledge exists about whether and how these ratings are associated between and within languages. Fifteen imageability databases were cross-correlated using nonparametric statistics. Some of these corresponded to unpublished data collected within a European research network—the Collaboration of Aphasia Trialists (COST IS1208). All but four of the correlations were significant. The average strength of the correlations (rho = .68) and the variance explained (R2 = 46%) were moderate. This implies that factors other than imageability may explain 54% of the results. Imageability ratings often correlate across languages. Different possibly interacting factors may explain the moderate strength and variance explained in the correlations: (1) linguistic and cultural factors; (2) intrinsic differences between the databases; (3) range effects; (4) small numbers of words in each database, equivalent words, and participants; and (5) mean age of the participants. The results suggest that imageability ratings may be used cross-linguistically. However, further understanding of the factors explaining the variance in the correlations will be needed before research and practical recommendations can be made.
  •  
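
A minimal sketch of the kind of nonparametric cross-correlation the abstract mentions: Spearman's rho between two databases' imageability ratings for a shared word list, with rho squared as the variance explained. The words and ratings below are invented placeholders, not data from the study.

    from scipy.stats import spearmanr

    # Hypothetical imageability ratings (1-7 scale) for the same words in two databases
    db_a = {"apple": 6.8, "run": 5.1, "idea": 2.3, "dog": 6.9, "hope": 2.8, "table": 6.5}
    db_b = {"apple": 6.5, "run": 4.7, "idea": 2.9, "dog": 6.6, "hope": 3.1, "table": 6.2}

    shared = sorted(set(db_a) & set(db_b))       # only words present in both databases
    rho, p = spearmanr([db_a[w] for w in shared], [db_b[w] for w in shared])
    print(f"rho = {rho:.2f}, variance explained = {rho ** 2:.0%}, p = {p:.3f}")
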
5.
  • Torrance, Mark, et al. (author)
  • Timed written picture naming in 14 European languages
  • 2018
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 50:2, s. 744-758
  • Journal article (peer-reviewed), abstract:
    • We describe the Multilanguage Written Picture Naming Dataset. This gives trial-level data and time and agreement norms for written naming of the 260 pictures of everyday objects that compose the colorized Snodgrass and Vanderwart picture set (Rossion & Pourtois in Perception, 33, 217–236, 2004). Adult participants gave keyboarded responses in their first language under controlled experimental conditions (N = 1,274, with subsamples responding in Bulgarian, Dutch, English, Finnish, French, German, Greek, Icelandic, Italian, Norwegian, Portuguese, Russian, Spanish, and Swedish). We measured the time to initiate a response (RT) and interkeypress intervals, and calculated measures of name and spelling agreement. There was a tendency across all languages for quicker RTs to pictures with higher familiarity, image agreement, and name frequency, and with higher name agreement. Effects of spelling agreement and effects on output rates after writing onset were present in some, but not all, languages. Written naming therefore shows name retrieval effects that are similar to those found in speech, but our findings suggest the need for cross-language comparisons as we seek to understand the orthographic retrieval and/or assembly processes that are specific to written output.
  •  
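
A minimal sketch of the two timing measures named in the abstract: response time as the latency of the first keypress after picture onset, and interkeypress intervals as differences between successive keystroke timestamps. The trial data are invented for illustration.

    def typing_measures(onset_ms, keypress_times_ms):
        """Return RT (first-keypress latency) and interkeypress intervals for one trial."""
        rt = keypress_times_ms[0] - onset_ms
        ikis = [b - a for a, b in zip(keypress_times_ms, keypress_times_ms[1:])]
        return rt, ikis

    # One hypothetical trial: picture shown at t = 0 ms, participant types a three-letter name
    rt, ikis = typing_measures(0, [812, 967, 1105])
    print(rt, ikis)   # 812 ms to initiate, then [155, 138] ms between keypresses
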
6.
  • Andersson, Richard, et al. (author)
  • One algorithm to rule them all? : An evaluation and discussion of ten eye movement event-detection algorithms
  • 2017
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 49:2, s. 616-637
  • Journal article (peer-reviewed), abstract:
    • Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters, and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of what algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484–2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
  •  
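
A minimal sketch of the simplest family of detectors of the kind evaluated in the record above: a velocity-threshold classifier that labels each sample as saccade or fixation. It is not one of the ten published algorithms, and real detectors add filtering, event merging, and handling of post-saccadic oscillations; the threshold below is an assumption.

    import numpy as np

    def velocity_threshold_classify(t, x, y, threshold_deg_s=30.0):
        """Label each gaze sample 'saccade' if its point-to-point velocity (deg/s)
        exceeds the threshold, otherwise 'fixation'. x and y are in degrees."""
        speed = np.hypot(np.gradient(x, t), np.gradient(y, t))
        return np.where(speed > threshold_deg_s, "saccade", "fixation")

    # Toy signal at 500 Hz: fixation, a rapid 20-degree horizontal shift, fixation
    t = np.arange(0, 1.0, 0.002)
    x = np.where(t < 0.5, 0.0, 20.0) + np.random.normal(0, 0.01, t.size)
    y = np.zeros_like(t)
    print(np.unique(velocity_threshold_classify(t, x, y), return_counts=True))
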
7.
  • Anikin, Andrey, et al. (author)
  • Nonlinguistic vocalizations from online amateur videos for emotion research : A validated corpus
  • 2017
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 49:2, s. 758-771
  • Journal article (peer-reviewed), abstract:
    • This study introduces a corpus of 260 naturalistic human nonlinguistic vocalizations representing nine emotions: amusement, anger, disgust, effort, fear, joy, pain, pleasure, and sadness. The recognition accuracy in a rating task varied greatly per emotion, from <40% for joy and pain, to >70% for amusement, pleasure, fear, and sadness. In contrast, the raters’ linguistic–cultural group had no effect on recognition accuracy: The predominantly English-language corpus was classified with similar accuracies by participants from Brazil, Russia, Sweden, and the UK/USA. Supervised random forest models classified the sounds as accurately as the human raters. The best acoustic predictors of emotion were pitch, harmonicity, and the spacing and regularity of syllables. This corpus of ecologically valid emotional vocalizations can be filtered to include only sounds with high recognition rates, in order to study reactions to emotional stimuli of known perceptual types (reception side), or can be used in its entirety to study the association between affective states and vocal expressions (production side).
  •  
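
A minimal sketch of the supervised classification mentioned in the abstract, a random forest predicting emotion categories from acoustic features such as pitch and harmonicity. The features and labels below are random placeholders, not the published corpus, so accuracy will sit near chance.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder acoustic features: pitch (Hz), harmonicity (dB), syllable rate (1/s)
    X = rng.normal(loc=[300.0, 10.0, 3.0], scale=[80.0, 5.0, 1.0], size=(260, 3))
    y = rng.choice(["amusement", "anger", "fear", "sadness"], size=260)  # placeholder labels

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())   # mean cross-validated accuracy
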
8.
  • Anikin, Andrey (author)
  • Soundgen : An open-source tool for synthesizing nonverbal vocalizations
  • 2019
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 51:2, s. 778-792
  • Journal article (peer-reviewed), abstract:
    • Voice synthesis is a useful method for investigating the communicative role of different acoustic features. Although many text-to-speech systems are available, researchers of human nonverbal vocalizations and bioacousticians may profit from a dedicated simple tool for synthesizing and manipulating natural-sounding vocalizations. Soundgen (https://CRAN.R-project.org/package=soundgen) is an open-source R package that synthesizes nonverbal vocalizations based on meaningful acoustic parameters, which can be specified from the command line or in an interactive app. This tool was validated by comparing the perceived emotion, valence, arousal, and authenticity of 60 recorded human nonverbal vocalizations (screams, moans, laughs, and so on) and their approximate synthetic reproductions. Each synthetic sound was created by manually specifying only a small number of high-level control parameters, such as syllable length and a few anchors for the intonation contour. Nevertheless, the valence and arousal ratings of synthetic sounds were similar to those of the original recordings, and the authenticity ratings were comparable, maintaining parity with the originals for less complex vocalizations. Manipulating the precise acoustic characteristics of synthetic sounds may shed light on the salient predictors of emotion in the human voice. More generally, soundgen may prove useful for any studies that require precise control over the acoustic features of nonspeech sounds, including research on animal vocalizations and auditory perception.
  •  
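
Soundgen itself is an R package; the Python sketch below only illustrates the underlying idea of parametric synthesis from a few high-level controls, here a syllable length and a handful of pitch anchors interpolated into an intonation contour. The parameter names and the synthesis are invented for illustration and much simpler than the published tool.

    import numpy as np

    def synth_vocalization(syllable_len=0.4, pitch_anchors=(220, 330, 180), sr=16000):
        """Toy synthesis: one 'syllable' whose pitch follows the interpolated anchors."""
        t = np.arange(int(syllable_len * sr)) / sr
        anchor_times = np.linspace(0, syllable_len, len(pitch_anchors))
        pitch = np.interp(t, anchor_times, pitch_anchors)      # intonation contour (Hz)
        phase = 2 * np.pi * np.cumsum(pitch) / sr              # integrate instantaneous frequency
        envelope = np.sin(np.pi * t / syllable_len)            # simple rise-and-fall amplitude
        return envelope * np.sin(phase)

    wave = synth_vocalization()
    print(wave.shape)   # (6400,) samples, ready to be written to a WAV file
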
9.
  •  
10.
  • Bååth, Rasmus (author)
  • Estimating the distribution of sensorimotor synchronization data : A Bayesian hierarchical modeling approach.
  • 2015
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528.
  • Journal article (peer-reviewed), abstract:
    • The sensorimotor synchronization paradigm is used when studying the coordination of rhythmic motor responses with a pacing stimulus and is an important paradigm in the study of human timing and time perception. Two measures of performance frequently calculated using sensorimotor synchronization data are the average offset and variability of the stimulus-to-response asynchronies-the offsets between the stimuli and the motor responses. Here it is shown that assuming that asynchronies are normally distributed when estimating these measures can result in considerable underestimation of both the average offset and variability. This is due to a tendency for the distribution of the asynchronies to be bimodal and left skewed when the interstimulus interval is longer than 2 s. It is argued that (1) this asymmetry is the result of the distribution of the asynchronies being a mixture of two types of responses-predictive and reactive-and (2) the main interest in a sensorimotor synchronization study is the predictive responses. A Bayesian hierarchical modeling approach is proposed in which sensorimotor synchronization data are modeled as coming from a right-censored normal distribution that effectively separates the predictive responses from the reactive responses. Evaluation using both simulated data and experimental data from a study by Repp and Doggett (2007) showed that the proposed approach produces more precise estimates of the average offset and variability, with considerably less underestimation.
  •  
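
A minimal sketch of the core statistical idea in the abstract: treating asynchronies beyond a censoring point as right-censored so that late, reactive responses do not distort the estimates for the predictive responses. This is a plain maximum-likelihood fit with scipy on simulated data, not the Bayesian hierarchical model of the paper; the censoring point is an assumption.

    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(1)

    # Simulated asynchronies (s): mostly predictive (negative mean) plus some reactive responses
    asynchronies = np.concatenate([rng.normal(-0.05, 0.03, 180), rng.normal(0.15, 0.05, 20)])
    censor = 0.1    # treat responses later than +100 ms as right-censored (reactive)

    def neg_log_lik(params, x, c):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                              # keep sigma positive
        ll = stats.norm.logpdf(x[x < c], mu, sigma).sum()
        ll += np.sum(x >= c) * stats.norm.logsf(c, mu, sigma)  # P(asynchrony >= censoring point)
        return -ll

    fit = optimize.minimize(neg_log_lik, x0=[0.0, np.log(0.05)], args=(asynchronies, censor))
    mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
    print(f"mean asynchrony = {mu_hat * 1000:.0f} ms, SD = {sigma_hat * 1000:.0f} ms")
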
