SwePub
Search the SwePub database


Results list for the search "L773:1554 3528"

Search: L773:1554 3528

  • Results 1-50 of 65
1.
  • Ashton, Stephanie, et al. (author)
  • The Index of Intrusion Control (IIC) : Capturing individual variability in intentional intrusion control in the laboratory
  • 2024
  • In: Behavior Research Methods. - : Springer Nature. - 1554-351X .- 1554-3528. ; 56:4, pp. 4061-4072
  • Journal article (peer-reviewed), abstract:
    • Intrusive memories can be downregulated using intentional memory control, as measured via the Think/No-Think paradigm. In this task, participants retrieve or suppress memories in response to an associated reminder cue. After each suppression trial, participants rate whether the association intruded into awareness. Previous research has found that repeatedly exerting intentional control over memory intrusions reduces their frequency. This decrease is often summarised with a linear index, which may miss more complex patterns characterising the temporal dynamics of intrusion control. The goal of this paper is to propose a novel metric of intrusion control that captures those dynamic changes over time as a single index. Results from a mega-analysis of published datasets revealed that the change in intrusion frequencies across time is not purely linear, but also includes non-linear dynamics that seem best captured by a log function of the number of suppression attempts. To capture those linear and non-linear dynamics, we propose the Index of Intrusion Control (IIC), which relies on the integral of intrusion changes across suppression attempts. Simulations revealed that the IIC best captured the linear and non-linear dynamics of intrusion suppression when compared with other linear or non-linear indexes of control, such as the regression slope or Spearman correlation, respectively. Our findings demonstrate how the IIC may therefore act as a more reliable metric to capture individual differences in intrusion control, and examine the role of non-linear dynamics characterizing the conscious access to unwanted memories.
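Since the IIC is defined as the integral of intrusion changes across suppression attempts, a small sketch can make the contrast with the linear and rank-based indexes concrete. This is not the authors' implementation; the simulated data, the smoothing window, and the exact integration convention are assumptions:

    import numpy as np
    from scipy.stats import spearmanr, linregress

    rng = np.random.default_rng(0)
    trials = np.arange(1, 49)
    # Hypothetical participant whose intrusion probability decays with log(trial).
    p = np.clip(0.8 - 0.25 * np.log(trials), 0.05, None)
    intrusions = rng.binomial(1, p)

    # Smooth binary intrusion reports into a per-trial curve (moving average).
    window = 8
    curve = np.convolve(intrusions, np.ones(window) / window, mode="valid")
    x = np.arange(curve.size)

    slope = linregress(x, curve).slope   # linear index
    rho, _ = spearmanr(x, curve)         # rank-based (monotonic) index
    # Integral-style index: area between the initial intrusion level and the
    # observed curve, accumulated across suppression attempts.
    iic_like = np.trapz(curve[0] - curve, x)
    print(f"slope={slope:.4f}  rho={rho:.3f}  integral index={iic_like:.2f}")
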
2.
  • Buatois, Alexis, 1989, et al. (author)
  • A simple semi-automated home-tank method and procedure to explore classical associative learning in adult zebrafish
  • 2024
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 56, pp. 736-749
  • Journal article (peer-reviewed), abstract:
    • The zebrafish is a laboratory species that gained increasing popularity over the last decade in a variety of subfields of biology, including toxicology, ecology, medicine, and the neurosciences. An important phenotype often measured in these fields is behaviour. Consequently, numerous new behavioural apparatuses and paradigms have been developed for the zebrafish, including methods for the analysis of learning and memory in adult zebrafish. Perhaps the biggest obstacle in these methods is that the zebrafish is particularly sensitive to human handling. To overcome this confound, automated learning paradigms have been developed with varying success. In this manuscript, we present a semi-automated home tank-based learning/memory test paradigm utilizing visual cues, and show that it is capable of quantifying classical associative learning performance in zebrafish. We demonstrate that in this task, zebrafish successfully acquire the association between coloured-light and food reward. The hardware and software components of the task are easy and cheap to obtain and simple to assemble and set up. The procedures of the paradigm allow the test fish to remain completely undisturbed by the experimenter for several days in their home (test) tank, eliminating human handling or human interference induced stress. We demonstrate that the development of cheap and simple automated home-tank-based learning paradigms for the zebrafish is feasible. We argue that such tasks will allow us to better characterize numerous cognitive and mnemonic features of the zebrafish, including elemental as well as configural learning and memory, which will, in turn, also enhance our ability to study neurobiological mechanisms underlying learning and memory using this model organism.
3.
  • Englund, Mats P., et al. (author)
  • An inexpensive and accurate method of measuring the force of responses in reaction time research
  • 2009
  • In: Behavior Research Methods. - : Psychonomic Society. - 1554-351X .- 1554-3528. ; 41:4, pp. 1254-1261
  • Journal article (peer-reviewed), abstract:
    • Together with reaction time (RT), the force with which people respond to stimuli can provide important clues about cognitive and affective processes. We discuss some of the issues surrounding the accurate measurement and interpretation of response force, and present a response key by which response force can be measured regularly and unobtrusively in RT research. The advantage of the response key described is that it operates like a standard response key of the type used regularly in classic RT experiments. The construction of the response key is described in detail and its potential assessed by way of an experiment examining response force in a simple reaction task to visual stimuli of increasing brightness and size.
4.
  • Fjeld, Morten, 1965, et al. (author)
  • Epistemic action: a measure for cognitive support in tangible user interfaces?
  • 2009
  • In: Behavior Research Methods. - 1554-351X .- 1554-3528. ; 41:3, pp. 876-881
  • Journal article (peer-reviewed), abstract:
    • The quality of user interfaces is often measured in terms of efficiency, effectiveness, and satisfaction. In the area of tangible user interfaces, epistemic—or exploratory—action has been suggested as a fourth measure of quality. In a computer game study (Kirsh & Maglio, 1992, 1994), players used epistemic actions to modify the environment, which helped them determine the correct position of blocks with less mental effort. There, the researchers found that it might be easier to physically modify the external world and then interpret it than to compute and interpret a new state mentally. Specifically, epistemic action may be a relevant concept when researching tangible user interfaces incorporating physical handles. This article examines the potential relations between the three traditional measures of usability and epistemic actions using three spatial planning tools with different degrees of physicality. The results indicate that epistemic action is a measure that is independent of the three traditional usability measures: efficiency, effectiveness, and satisfaction. However, epistemic action does not increase linearly with the physicality of a user interface, and it probably is a more complex measure that is also related to the reusability of the interface. Further research is needed to fully understand the potential of this measure.
5.
  • Issa Mattos, David, 1990, et al. (author)
  • Bayesian paired comparison with the bpcs package
  • 2022
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 54, pp. 2025-2045
  • Journal article (peer-reviewed), abstract:
    • This article introduces the bpcs R package (Bayesian Paired Comparison in Stan) and the statistical models implemented in the package. This package aims to facilitate the use of Bayesian models for paired comparison data in behavioral research. Bayesian analysis of paired comparison data allows parameter estimation even in conditions where the maximum likelihood does not exist, allows easy extension of paired comparison models, provides straightforward interpretation of the results with credible intervals, has better control of type I error, has more robust evidence towards the null hypothesis, allows propagation of uncertainties, includes prior information, and performs well when handling models with many parameters and latent variables. The bpcs package provides a consistent interface for R users and several functions to evaluate the posterior distribution of all parameters, to estimate the posterior distribution of any contest between items, and to obtain the posterior distribution of the ranks. Three reanalyses of recent studies that used the frequentist Bradley-Terry model are presented. These reanalyses are conducted with the Bayesian models of the bpcs package, and all the code used to fit the models, generate the figures, and produce the tables is available in the online appendix.
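The bpcs package itself is R and Stan based; as a language-neutral illustration of the underlying Bradley-Terry model, here is a minimal maximum-likelihood sketch in Python. The win-count matrix is invented, and a Bayesian treatment as in bpcs would add priors, which is exactly what rescues estimation when the maximum likelihood does not exist:

    import numpy as np
    from scipy.optimize import minimize

    # wins[i, j] = how often item i beat item j (hypothetical counts).
    wins = np.array([[0, 7, 9],
                     [3, 0, 6],
                     [1, 4, 0]])

    def neg_log_lik(lam):
        lam = lam - lam.mean()            # fix the scale (identifiability)
        diff = lam[:, None] - lam[None, :]
        log_p = -np.log1p(np.exp(-diff))  # log P(i beats j) = log sigmoid(diff)
        return -(wins * log_p).sum()

    res = minimize(neg_log_lik, np.zeros(3), method="BFGS")
    ability = res.x - res.x.mean()
    print("estimated abilities:", ability.round(3))
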
6.
  • Kaatiala, Jussi, et al. (author)
  • A graphical user interface for infant ERP analysis
  • 2014
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 46:3, pp. 745-757
  • Journal article (peer-reviewed), abstract:
    • Recording of event-related potentials (ERPs) is one of the best-suited technologies for examining brain function in human infants. Yet the existing software packages are not optimized for the unique requirements of analyzing artifact-prone ERP data from infants. We developed a new graphical user interface that enables an efficient implementation of a two-stage approach to the analysis of infant ERPs. In the first stage, video records of infant behavior are synchronized with ERPs at the level of individual trials to reject epochs with noncompliant behavior and other artifacts. In the second stage, the interface calls MATLAB and EEGLAB (Delorme & Makeig, Journal of Neuroscience Methods 134(1):9-21, 2004) functions for further preprocessing of the ERP signal itself (i.e., filtering, artifact removal, interpolation, and rereferencing). Finally, methods are included for data visualization and analysis by using bootstrapped group averages. Analyses of simulated and real EEG data demonstrated that the proposed approach can be effectively used to establish task compliance, remove various types of artifacts, and perform representative visualizations and statistical comparisons of ERPs. The interface is available for download from http://www.uta.fi/med/icl/methods/eeg.html in a format that is widely applicable to ERP studies with special populations and open for further editing by users.
7.
  • Leppänen, Jukka M, et al. (author)
  • Widely applicable MATLAB routines for automated analysis of saccadic reaction times
  • 2015
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 47:2, pp. 538-548
  • Journal article (peer-reviewed), abstract:
    • Saccadic reaction time (SRT) is a widely used dependent variable in eye-tracking studies of human cognition and its disorders. SRTs are also frequently measured in studies with special populations, such as infants and young children, who are limited in their ability to follow verbal instructions and remain in a stable position over time. In this article, we describe a library of MATLAB routines (Mathworks, Natick, MA) that are designed to (1) enable completely automated implementation of SRT analysis for multiple data sets and (2) cope with the unique challenges of analyzing SRTs from eye-tracking data collected from poorly cooperating participants. The library includes preprocessing and SRT analysis routines. The preprocessing routines (i.e., moving median filter and interpolation) are designed to remove technical artifacts and missing samples from raw eye-tracking data. The SRTs are detected by a simple algorithm that identifies the last point of gaze in the area of interest, but, critically, the extracted SRTs are further subjected to a number of postanalysis verification checks to exclude values contaminated by artifacts. Example analyses of data from 5- to 11-month-old infants demonstrated that SRTs extracted with the proposed routines were in high agreement with SRTs obtained manually from video records, robust against potential sources of artifact, and exhibited moderate to high test-retest stability. We propose that the present library has wide utility in standardizing and automating SRT-based cognitive testing in various populations. The MATLAB routines are open source and can be downloaded from http://www.uta.fi/med/icl/methods.html .
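The detection rule quoted above (the last point of gaze in the area of interest) is simple enough to sketch. This is a schematic stand-in for the MATLAB routines, with an invented sampling rate, AOI size, and stimulus onset:

    import numpy as np
    from scipy.signal import medfilt

    fs = 300.0                       # assumed sampling rate (Hz)
    t = np.arange(600) / fs
    # Hypothetical horizontal gaze: fixation at 0 deg, saccade to 10 deg at 1 s.
    x = np.where(t < 1.0, 0.0, 10.0) + np.random.default_rng(1).normal(0, 0.3, t.size)

    x_f = medfilt(x, kernel_size=5)  # moving median to suppress spike artifacts

    stimulus_onset = 0.8             # target appears at 0.8 s (assumed)
    aoi_halfwidth = 2.0              # deg; AOI around the initial fixation point
    in_aoi = np.abs(x_f) < aoi_halfwidth
    after_onset = t >= stimulus_onset
    last_in_aoi = np.max(np.where(in_aoi & after_onset)[0])
    srt = t[last_in_aoi] - stimulus_onset
    print(f"saccadic reaction time = {srt * 1000:.0f} ms")
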
8.
  • Lidestam, Björn (author)
  • Audiovisual presentation of video-recorded stimuli at a high frame rate
  • 2014
  • In: Behavior Research Methods. - : Springer Verlag (Germany). - 1554-351X .- 1554-3528. ; 46:2, pp. 499-516
  • Journal article (peer-reviewed), abstract:
    • A method for creating and presenting video-recorded synchronized audiovisual stimuli at a high frame rate, which would be highly useful for psychophysical studies on, for example, just-noticeable differences and gating, is presented. Methods for accomplishing this include recording audio and video separately using an exact synchronization signal, editing the recordings and finding exact synchronization points, and presenting the synchronized audiovisual stimuli with a desired frame rate on a cathode ray tube display using MATLAB and Psychophysics Toolbox 3. The methods from an empirical gating study (Moradi, Lidestam, and Rönnberg, Frontiers in Psychology 4: 359, 2013) are presented as an example of the implementation of playback at 120 fps.
9.
  • Łuniewska, Magdalena, et al. (author)
  • Ratings of age of acquisition of 299 words across 25 languages : is there a cross-linguistic order of words?
  • 2016
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 48:3, pp. 1154-1177
  • Journal article (peer-reviewed), abstract:
    • We present a new set of subjective age-of-acquisition (AoA) ratings for 299 words (158 nouns, 141 verbs) in 25 languages from five language families (Afro-Asiatic: Semitic languages; Altaic: one Turkic language; Indo-European: Baltic, Celtic, Germanic, Hellenic, Slavic, and Romance languages; Niger-Congo: one Bantu language; Uralic: Finnic and Ugric languages). Adult native speakers reported the age at which they had learned each word. We present a comparison of the AoA ratings across all languages by contrasting them in pairs. This comparison shows a consistency in the orders of ratings across the 25 languages. The data were then analyzed (1) to ascertain how the demographic characteristics of the participants influenced AoA estimations and (2) to assess differences caused by the exact form of the target question (when did you learn vs. when do children learn this word); (3) to compare the ratings obtained in our study to those of previous studies; and (4) to assess the validity of our study by comparison with quasi-objective AoA norms derived from the MacArthur–Bates Communicative Development Inventories (MB-CDI). All 299 words were judged as being acquired early (mostly before the age of 6 years). AoA ratings were associated with the raters’ social or language status, but not with the raters’ age or education. Parents reported words as being learned earlier, and bilinguals reported learning them later. Estimations of the age at which children learn the words revealed significantly lower ratings of AoA. Finally, comparisons with previous AoA and MB-CDI norms support the validity of the present estimations. Our AoA ratings are available for research or other purposes.
10.
  • Niehorster, Diederick C., et al. (author)
  • Characterizing gaze position signals and synthesizing noise during fixations in eye-tracking data
  • 2020
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 52:6, pp. 2515-2534
  • Journal article (peer-reviewed), abstract:
    • The magnitude of variation in the gaze position signals recorded by an eye tracker, also known as its precision, is an important aspect of an eye tracker’s data quality. However, data quality of eye-tracking signals is still poorly understood. In this paper, we therefore investigate the following: (1) How do the various available measures characterizing eye-tracking data during fixation relate to each other? (2) How are they influenced by signal type? (3) What type of noise should be used to augment eye-tracking data when evaluating eye-movement analysis methods? To support our analysis, this paper presents new measures to characterize signal type and signal magnitude based on RMS-S2S and STD, two established measures of precision. Simulations are performed to investigate how each of these measures depends on the number of gaze position samples over which they are calculated, and to reveal how RMS-S2S and STD relate to each other and to measures characterizing the temporal spectrum composition of the recorded gaze position signal. Further empirical investigations were performed using gaze position data recorded with five eye trackers from human and artificial eyes. We found that although the examined eye trackers produce gaze position signals with different characteristics, the relations between precision measures derived from simulations are borne out by the data. We furthermore conclude that data with a range of signal type values should be used to assess the robustness of eye-movement analysis methods. We present a method for generating artificial eye-tracker noise of any signal type and magnitude.
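RMS-S2S and STD, the two precision measures the paper builds on, are easy to state exactly; the sketch below computes both for a synthetic fixation (white noise, for which the two should differ by a factor of about the square root of 2):

    import numpy as np

    rng = np.random.default_rng(2)
    # Synthetic fixation: gaze samples (x, y) in degrees around a fixed point.
    x, y = rng.normal(0, 0.05, 500), rng.normal(0, 0.05, 500)

    # RMS-S2S: root mean square of sample-to-sample displacements.
    rms_s2s = np.sqrt(np.mean(np.diff(x) ** 2 + np.diff(y) ** 2))
    # STD: spatial standard deviation around the centroid.
    std = np.sqrt(np.var(x) + np.var(y))
    print(f"RMS-S2S = {rms_s2s:.4f} deg, STD = {std:.4f} deg")
    # For white noise RMS-S2S is about sqrt(2) * STD; deviations from that
    # ratio are what the paper exploits to characterize signal type.
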
11.
  • Nyström, Pär, et al. (author)
  • The TimeStudio Project : An open source scientific workflow system for the behavioral and brain sciences
  • 2016
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 48:2, pp. 542-552
  • Journal article (peer-reviewed), abstract:
    • This article describes a new open source scientific workflow system, the TimeStudio Project, dedicated to the behavioral and brain sciences. The program is written in MATLAB and features a graphical user interface for the dynamic pipelining of computer algorithms developed as TimeStudio plugins. TimeStudio includes both a set of general plugins (for reading data files, modifying data structures, visualizing data structures, etc.) and a set of plugins specifically developed for the analysis of event-related eyetracking data as a proof of concept. It is possible to create custom plugins to integrate new or existing MATLAB code anywhere in a workflow, making TimeStudio a flexible workbench for organizing and performing a wide range of analyses. The system also features an integrated sharing and archiving tool for TimeStudio workflows, which can be used to share workflows both during the data analysis phase and after scientific publication. TimeStudio thus facilitates the reproduction and replication of scientific studies, increases the transparency of analyses, and reduces individual researchers' analysis workload. The project website ( http://timestudioproject.com ) contains the latest releases of TimeStudio, together with documentation and user forums.
12.
  • Rofes, Adrià, et al. (author)
  • Imageability ratings across languages
  • 2018
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 50:3, pp. 1187-1197
  • Journal article (peer-reviewed), abstract:
    • Imageability is a psycholinguistic variable that indicates how well a word gives rise to a mental image or sensory experience. Imageability ratings are used extensively in psycholinguistic, neuropsychological, and aphasiological studies. However, little formal knowledge exists about whether and how these ratings are associated between and within languages. Fifteen imageability databases were cross-correlated using nonparametric statistics. Some of these corresponded to unpublished data collected within a European research network—the Collaboration of Aphasia Trialists (COST IS1208). All but four of the correlations were significant. The average strength of the correlations (rho = .68) and the variance explained (R2 = 46%) were moderate. This implies that factors other than imageability may explain 54% of the results. Imageability ratings often correlate across languages. Different possibly interacting factors may explain the moderate strength and variance explained in the correlations: (1) linguistic and cultural factors; (2) intrinsic differences between the databases; (3) range effects; (4) small numbers of words in each database, equivalent words, and participants; and (5) mean age of the participants. The results suggest that imageability ratings may be used cross-linguistically. However, further understanding of the factors explaining the variance in the correlations will be needed before research and practical recommendations can be made.
13.
  • Rosengren, William, et al. (author)
  • Modeling and quality assessment of nystagmus eye movements recorded using an eye-tracker
  • 2020
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 52:4, pp. 1729-1743
  • Journal article (peer-reviewed), abstract:
    • Mathematical modeling of nystagmus oscillations is a technique with applications in diagnostics, treatment evaluation, and acuity testing. Modeling is a powerful tool for the analysis of nystagmus oscillations but quality assessment of the input data is needed in order to avoid misinterpretation of the modeling results. In this work, we propose a signal quality metric for nystagmus waveforms, the normalized segment error (NSE). The NSE is based on the energy in the error signal between the observed oscillations and a reconstruction from a harmonic sinusoidal model called the normalized waveform model (NWM). A threshold for discrimination between nystagmus oscillations and disturbances is estimated using simulated signals and receiver operating characteristics (ROC). The ROC is optimized to find noisy segments and abrupt waveform and frequency changes in the simulated data that disturb the modeling. The discrimination threshold, ε, obtained from the ROC analysis, is applied to real recordings of nystagmus data in order to determine whether a segment is of high quality or not. The NWM parameters from both the simulated dataset and the nystagmus recordings are analyzed for the two classes suggested by the threshold. The optimized ε yielded a true-positive rate and a false-positive rate of 0.97 and 0.07, respectively, for the simulated data. The results from the NWM parameter analysis show that they are consistent with the known values of the simulated signals, and that the method estimates similar model parameters when performing analysis of repeated recordings from one subject.
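A rough sketch of an NSE-style computation: fit a harmonic sinusoidal model by least squares and take the normalized energy of the residual. The fundamental frequency, number of harmonics, and synthetic waveform are assumptions, and the authors' NWM fitting is more elaborate:

    import numpy as np

    fs, f0, K = 500.0, 4.0, 3     # sampling rate (Hz), nystagmus frequency, harmonics
    t = np.arange(500) / fs
    rng = np.random.default_rng(3)
    # Synthetic jerk-like waveform: two harmonics plus measurement noise.
    seg = (np.sin(2*np.pi*f0*t) + 0.4*np.sin(2*np.pi*2*f0*t + 0.5)
           + rng.normal(0, 0.2, t.size))

    # Design matrix of sine/cosine harmonics; least-squares model fit.
    H = np.column_stack([f(2*np.pi*k*f0*t)
                         for k in range(1, K + 1) for f in (np.sin, np.cos)])
    centered = seg - seg.mean()
    coef, *_ = np.linalg.lstsq(H, centered, rcond=None)
    recon = H @ coef

    nse = np.sum((centered - recon) ** 2) / np.sum(centered ** 2)
    print(f"NSE = {nse:.3f} (low = clean oscillation, high = disturbed segment)")
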
14.
  • Torrance, Mark, et al. (author)
  • Timed written picture naming in 14 European languages
  • 2018
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-351X .- 1554-3528. ; 50:2, pp. 744-758
  • Journal article (peer-reviewed), abstract:
    • We describe the Multilanguage Written Picture Naming Dataset. This gives trial-level data and time and agreement norms for written naming of the 260 pictures of everyday objects that compose the colorized Snodgrass and Vanderwart picture set (Rossion & Pourtois in Perception, 33, 217–236, 2004). Adult participants gave keyboarded responses in their first language under controlled experimental conditions (N = 1,274, with subsamples responding in Bulgarian, Dutch, English, Finnish, French, German, Greek, Icelandic, Italian, Norwegian, Portuguese, Russian, Spanish, and Swedish). We measured the time to initiate a response (RT) and interkeypress intervals, and calculated measures of name and spelling agreement. There was a tendency across all languages for quicker RTs to pictures with higher familiarity, image agreement, and name frequency, and with higher name agreement. Effects of spelling agreement and effects on output rates after writing onset were present in some, but not all, languages. Written naming therefore shows name retrieval effects that are similar to those found in speech, but our findings suggest the need for cross-language comparisons as we seek to understand the orthographic retrieval and/or assembly processes that are specific to written output.
15.
  • Valtakari, Niilo V., et al. (author)
  • A field test of computer-vision-based gaze estimation in psychology
  • 2024
  • In: Behavior Research Methods. - : Springer. - 1554-351X .- 1554-3528. ; 56:3, pp. 1900-1915
  • Journal article (peer-reviewed), abstract:
    • Computer-vision-based gaze estimation refers to techniques that estimate gaze direction directly from video recordings of the eyes or face without the need for an eye tracker. Although many such methods exist, their validation is often found in the technical literature (e.g., computer science conference papers). We aimed to (1) identify which computer-vision-based gaze estimation methods are usable by the average researcher in fields such as psychology or education, and (2) evaluate these methods. We searched for methods that do not require calibration and have clear documentation. Two toolkits, OpenFace and OpenGaze, were found to fulfill these criteria. First, we present an experiment where adult participants fixated on nine stimulus points on a computer screen. We filmed their face with a camera and processed the recorded videos with OpenFace and OpenGaze. We conclude that OpenGaze is accurate and precise enough to be used in screen-based experiments with stimuli separated by at least 11 degrees of gaze angle. OpenFace was not sufficiently accurate for such situations but can potentially be used in sparser environments. We then examined whether OpenFace could be used with horizontally separated stimuli in a sparse environment with infant participants. We compared dwell measures based on OpenFace estimates to the same measures based on manual coding. We conclude that OpenFace gaze estimates may potentially be used with measures such as relative total dwell time to sparse, horizontally separated areas of interest, but should not be used to draw conclusions about measures such as dwell duration.
16.
  • Valtakari, Niilo V., et al. (author)
  • Eye tracking in human interaction : Possibilities and limitations
  • 2021
  • In: Behavior Research Methods. - : Springer Nature. - 1554-351X .- 1554-3528. ; 53:4, pp. 1592-1608
  • Journal article (peer-reviewed), abstract:
    • There is a long history of interest in looking behavior during human interaction. With the advance of (wearable) video-based eye trackers, it has become possible to measure gaze during many different interactions. We outline the different types of eye-tracking setups that currently exist to investigate gaze during interaction. The setups differ mainly with regard to the nature of the eye-tracking signal (head- or world-centered) and the freedom of movement allowed for the participants. These features place constraints on the research questions that can be answered about human interaction. We end with a decision tree to help researchers judge the appropriateness of specific setups.
17.
  • Wengelin, Åsa, et al. (author)
  • Combined eyetracking and keystroke-logging methods for studying cognitive processes in text production
  • 2009
  • In: Behavior Research Methods. - New York : Springer-Verlag New York. - 1554-351X .- 1554-3528. ; 41:2, pp. 337-351
  • Journal article (peer-reviewed), abstract:
    • Writers typically spend a certain proportion of time looking back over the text that they have written. This is likely to serve a number of different functions, which are currently poorly understood. In this article, we present two systems, ScriptLog + TimeLine and EyeWrite, that adopt different and complementary approaches to exploring this activity by collecting and analyzing combined eye movement and keystroke data from writers composing extended texts. ScriptLog + TimeLine is a system that is based on an existing keystroke-logging program and uses heuristic, pattern-matching methods to identify reading episodes within eye movement data. EyeWrite is an integrated editor and analysis system that permits identification of the words that the writer fixates and their location within the developing text. We demonstrate how the methods instantiated within these systems can be used to make sense of the large amount of data generated by eyetracking and keystroke logging in order to inform understanding of the cognitive processes that underlie written text production.
18.
  • Willander, Johan, et al. (author)
  • Development of a new Clarity of Auditory Imagery Scale
  • 2010
  • In: Behavior Research Methods. - 1554-351X .- 1554-3528. ; 42:3, pp. 785-790
  • Journal article (peer-reviewed), abstract:
    • In the psychological study of auditory imagery, instruments for measuring vividness or clarity have existed for some time. The present article argues that existing scales are ambiguous, in that clarity and vividness of auditory imagery are addressed simultaneously, and that empirical validations of those scales suffer from inadequate methods. The aim of the present study was to develop a new psychometric scale, the Clarity of Auditory Imagery Scale, measuring individual differences in clarity of auditory imagery. Drawing on previous literature, 16 items were generated, forming an initial item pool that was presented to 212 respondents. The hypothesized single dimensionality inherent in the data was confirmed using Velicer’s (1976) minimum average partial test and parallel analysis. Also, data were factor analyzed, extracting a stable one-factor solution including all 16 items. The internal consistency of the final scale was satisfactory (coefficient alpha = .88). Other properties of the questionnaire, such as test–retest reliability, remain to be established.
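For reference, the reliability statistic reported above (coefficient alpha) is computed from the item variances and the variance of the scale total; a minimal sketch with simulated item data:

    import numpy as np

    rng = np.random.default_rng(4)
    latent = rng.normal(size=(212, 1))                    # 212 respondents
    items = latent + rng.normal(0, 1.0, size=(212, 16))   # 16 noisy items

    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
    print(f"coefficient alpha = {alpha:.2f}")
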
19.
  • Wilsson, Lowe, et al. (author)
  • synr : An R package for handling synesthesia consistency test data
  • 2022
  • In: Behavior Research Methods. - Berlin : Springer Nature. - 1554-351X .- 1554-3528.
  • Journal article (peer-reviewed), abstract:
    • Synesthesia is a phenomenon where sensory stimuli or cognitive concepts elicit additional perceptual experiences. For instance, in a commonly studied type of synesthesia, stimuli such as words written in black font elicit experiences of other colors, e.g., red. In order to objectively verify synesthesia, participants are asked to choose colors for repeatedly presented stimuli and the consistency of their choices is evaluated (consistency test). Previously, there has been no publicly available and easy-to-use tool for analyzing consistency test results. Here, the R package synr is introduced, which provides an efficient interface for exploring consistency test data and applying common procedures for analyzing them. Importantly, synr also implements a novel method enabling identification of participants whose scores cannot be interpreted, e.g., who only give black or red color responses. To this end, density-based spatial clustering of applications with noise (DBSCAN) is applied in conjunction with a measure of spread in 3D space. An application of synr with pre-existing openly accessible data illustrating how synr is used in practice is presented. Also included is a comparison of synr’s data validation procedure and human ratings, which found that synr had high correspondence with human ratings and outperformed human raters in situations where human raters were easily misled. Challenges for widespread adoption of synr as well as suggestions for using synr within the field of synesthesia and other areas of psychological research are discussed.
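A minimal sketch of the validation idea (not synr's actual R interface): cluster a participant's color responses in 3D color space with DBSCAN and combine the cluster count with the fraction of clustered points to flag uninterpretable data. The thresholds below are invented:

    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(5)
    varied = rng.uniform(0, 1, size=(36, 3))                 # diverse RGB responses
    degenerate = np.tile([[0, 0, 0], [1, 0, 0]], (18, 1))    # only black/red responses

    def looks_degenerate(colors, eps=0.1, min_samples=4, max_clusters=3):
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(colors)
        clustered = labels != -1
        n_clusters = len(set(labels[clustered]))
        # Degenerate: nearly all responses fall into a handful of tight clumps.
        return clustered.mean() > 0.9 and n_clusters <= max_clusters

    print(looks_degenerate(varied))      # False: interpretable data
    print(looks_degenerate(degenerate))  # True: black/red-only responder
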
20.
  • Andersson, Richard, et al. (author)
  • One algorithm to rule them all? : An evaluation and discussion of ten eye movement event-detection algorithms
  • 2017
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 49:2, pp. 616-637
  • Journal article (peer-reviewed), abstract:
    • Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters, and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of what algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484–2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
21.
  • Anikin, Andrey, et al. (author)
  • A practical guide to calculating vocal tract length and scale-invariant formant patterns
  • 2023
  • In: Behavior Research Methods. - 1554-3528.
  • Journal article (peer-reviewed), abstract:
    • Formants (vocal tract resonances) are increasingly analyzed not only by phoneticians in speech but also by behavioral scientists studying diverse phenomena such as acoustic size exaggeration and articulatory abilities of non-human animals. This often involves estimating vocal tract length acoustically and producing scale-invariant representations of formant patterns. We present a theoretical framework and practical tools for carrying out this work, including open-source software solutions included in R packages soundgen and phonTools. Automatic formant measurement with linear predictive coding is error-prone, but formant_app provides an integrated environment for formant annotation and correction with visual and auditory feedback. Once measured, formants can be normalized using a single recording (intrinsic methods) or multiple recordings from the same individual (extrinsic methods). Intrinsic speaker normalization can be as simple as taking formant ratios and calculating the geometric mean as a measure of overall scale. The regression method implemented in the function estimateVTL calculates the apparent vocal tract length assuming a single-tube model, while its residuals provide a scale-invariant vowel space based on how far each formant deviates from equal spacing (the schwa function). Extrinsic speaker normalization provides more accurate estimates of speaker- and vowel-specific scale factors by pooling information across recordings with simple averaging or mixed models, which we illustrate with example datasets and R code. The take-home messages are to record several calls or vowels per individual, measure at least three or four formants, check formant measurements manually, treat uncertain values as missing, and use the statistical tools best suited to each modeling context.
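The regression method for apparent vocal tract length follows from the single-tube model, where formant i of a tube closed at one end sits near (2i - 1)c / (4 VTL). Below is a minimal Python analogue of what soundgen's estimateVTL does in R (the formant values are invented):

    import numpy as np

    c = 35400.0                          # speed of sound in warm air, cm/s
    formants = np.array([550.0, 1650.0, 2750.0, 3850.0])  # measured F1-F4, Hz
    odd = 2 * np.arange(1, formants.size + 1) - 1         # 1, 3, 5, 7

    # Least-squares regression through the origin: formant = slope * (2i - 1).
    slope = (odd @ formants) / (odd @ odd)
    vtl = c / (4 * slope)
    print(f"apparent vocal tract length = {vtl:.1f} cm")
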
22.
  • Anikin, Andrey, et al. (author)
  • Nonlinguistic vocalizations from online amateur videos for emotion research : A validated corpus
  • 2017
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 49:2, pp. 758-771
  • Journal article (peer-reviewed), abstract:
    • This study introduces a corpus of 260 naturalistic human nonlinguistic vocalizations representing nine emotions: amusement, anger, disgust, effort, fear, joy, pain, pleasure, and sadness. The recognition accuracy in a rating task varied greatly per emotion, from <40% for joy and pain, to >70% for amusement, pleasure, fear, and sadness. In contrast, the raters’ linguistic–cultural group had no effect on recognition accuracy: The predominantly English-language corpus was classified with similar accuracies by participants from Brazil, Russia, Sweden, and the UK/USA. Supervised random forest models classified the sounds as accurately as the human raters. The best acoustic predictors of emotion were pitch, harmonicity, and the spacing and regularity of syllables. This corpus of ecologically valid emotional vocalizations can be filtered to include only sounds with high recognition rates, in order to study reactions to emotional stimuli of known perceptual types (reception side), or can be used in its entirety to study the association between affective states and vocal expressions (production side).
23.
  • Anikin, Andrey (author)
  • Soundgen : An open-source tool for synthesizing nonverbal vocalizations
  • 2019
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 51:2, pp. 778-792
  • Journal article (peer-reviewed), abstract:
    • Voice synthesis is a useful method for investigating the communicative role of different acoustic features. Although many text-to-speech systems are available, researchers of human nonverbal vocalizations and bioacousticians may profit from a dedicated simple tool for synthesizing and manipulating natural-sounding vocalizations. Soundgen (https://CRAN.R-project.org/package=soundgen) is an open-source R package that synthesizes nonverbal vocalizations based on meaningful acoustic parameters, which can be specified from the command line or in an interactive app. This tool was validated by comparing the perceived emotion, valence, arousal, and authenticity of 60 recorded human nonverbal vocalizations (screams, moans, laughs, and so on) and their approximate synthetic reproductions. Each synthetic sound was created by manually specifying only a small number of high-level control parameters, such as syllable length and a few anchors for the intonation contour. Nevertheless, the valence and arousal ratings of synthetic sounds were similar to those of the original recordings, and the authenticity ratings were comparable, maintaining parity with the originals for less complex vocalizations. Manipulating the precise acoustic characteristics of synthetic sounds may shed light on the salient predictors of emotion in the human voice. More generally, soundgen may prove useful for any studies that require precise control over the acoustic features of nonspeech sounds, including research on animal vocalizations and auditory perception.
25.
  • Breidegard, Björn (author)
  • Computer based automatic finger and speech tracking system
  • 2007
  • In: Behavior Research Methods. - 1554-3528. ; 39:4, pp. 824-834
  • Journal article (peer-reviewed), abstract:
    • This article presents the first technology ever for on-line registration and interactive and automatic analysis of finger movements during tactile reading (Braille and tactile pictures). Interactive software has been developed for registration (with two cameras and a microphone), MPEG-2 video compression and storage on disk or DVD as well as an Interactive Analysis Program to aid human analysis. An Automatic Finger Tracking System has been implemented which also semi-automatically tracks the reading aloud speech on the syllable level. This set of tools opens the way for large scale studies of blind people reading Braille or tactile images. It has been tested in a pilot project involving congenitally blind subjects reading texts and pictures.
26.
  • Byrne, Sean Anthony, et al. (author)
  • Precise localization of corneal reflections in eye images using deep learning trained on synthetic data
  • In: Behavior Research Methods. - 1554-3528.
  • Journal article (peer-reviewed), abstract:
    • We present a deep learning method for accurately localizing the center of a single corneal reflection (CR) in an eye image. Unlike previous approaches, we use a convolutional neural network (CNN) that was trained solely using synthetic data. Using only synthetic data has the benefit of completely sidestepping the time-consuming process of manual annotation that is required for supervised training on real eye images. To systematically evaluate the accuracy of our method, we first tested it on images with synthetic CRs placed on different backgrounds and embedded in varying levels of noise. Second, we tested the method on two datasets consisting of high-quality videos captured from real eyes. Our method outperformed state-of-the-art algorithmic methods on real eye images with a 3-41.5% reduction in terms of spatial precision across data sets, and performed on par with state-of-the-art on synthetic images in terms of spatial accuracy. We conclude that our method provides a precise method for CR center localization and provides a solution to the data availability problem, which is one of the important common roadblocks in the development of deep learning models for gaze estimation. Due to the superior CR center localization and ease of application, our method has the potential to improve the accuracy and precision of CR-based eye trackers.
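The appeal of synthetic training data is that the ground-truth CR center comes for free. A toy sketch of rendering one training image; the paper's actual rendering pipeline is considerably richer, and the spot shape, background, and noise model here are assumptions:

    import numpy as np

    def synth_cr_image(size=64, center=(31.3, 28.7), sigma=2.0, noise=0.05, seed=6):
        rng = np.random.default_rng(seed)
        y, x = np.mgrid[0:size, 0:size]
        cy, cx = center
        # Corneal-reflection-like Gaussian spot at a known sub-pixel position.
        spot = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
        img = 0.2 + 0.8 * spot + rng.normal(0, noise, (size, size))
        return np.clip(img, 0, 1), np.array(center)  # image + exact CR center

    img, label = synth_cr_image()
    print(img.shape, "ground-truth center:", label)
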
27.
  • Bååth, Rasmus (author)
  • Estimating the distribution of sensorimotor synchronization data : A Bayesian hierarchical modeling approach.
  • 2015
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528.
  • Journal article (peer-reviewed), abstract:
    • The sensorimotor synchronization paradigm is used when studying the coordination of rhythmic motor responses with a pacing stimulus and is an important paradigm in the study of human timing and time perception. Two measures of performance frequently calculated using sensorimotor synchronization data are the average offset and variability of the stimulus-to-response asynchronies: the offsets between the stimuli and the motor responses. Here it is shown that assuming that asynchronies are normally distributed when estimating these measures can result in considerable underestimation of both the average offset and variability. This is due to a tendency for the distribution of the asynchronies to be bimodal and left skewed when the interstimulus interval is longer than 2 s. It is argued that (1) this asymmetry is the result of the distribution of the asynchronies being a mixture of two types of responses, predictive and reactive, and (2) the main interest in a sensorimotor synchronization study is the predictive responses. A Bayesian hierarchical modeling approach is proposed in which sensorimotor synchronization data are modeled as coming from a right-censored normal distribution that effectively separates the predictive responses from the reactive responses. Evaluation using both simulated data and experimental data from a study by Repp and Doggett (2007) showed that the proposed approach produces more precise estimates of the average offset and variability, with considerably less underestimation.
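A simplified, non-hierarchical sketch of the core idea: treat asynchronies beyond a censoring point as right-censored draws from the predictive-response distribution, and compare the fit with naive normal estimates. The censoring point and the simulated mixture are assumptions:

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)
    predictive = rng.normal(-80, 40, 400)   # anticipatory taps (ms)
    reactive = rng.normal(150, 30, 60)      # late, stimulus-driven taps
    asyn = np.concatenate([predictive, reactive])

    c = 100.0                               # censoring point (ms), assumed
    obs = asyn[asyn < c]
    n_cens = np.sum(asyn >= c)

    def nll(theta):
        mu, log_sd = theta
        sd = np.exp(log_sd)
        # Uncensored points contribute the density; censored points the tail mass.
        return -(norm.logpdf(obs, mu, sd).sum() + n_cens * norm.logsf(c, mu, sd))

    mu, log_sd = minimize(nll, [0.0, np.log(50)], method="Nelder-Mead").x
    print(f"naive mean/SD: {asyn.mean():.1f}/{asyn.std():.1f}")
    print(f"censored-normal fit: {mu:.1f}/{np.exp(log_sd):.1f}")
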
28.
  • Děchtěrenko, Filip, et al. (author)
  • Flipping the stimulus : effects on scanpath coherence?
  • 2016
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528.
  • Journal article (peer-reviewed), abstract:
    • In experiments investigating dynamic tasks, it is often useful to examine eye movement scan patterns. We can present trials repeatedly and compute within-subjects/conditions similarity in order to distinguish between signal and noise in gaze data. To avoid obvious repetitions of trials, filler trials must be added to the experimental protocol, resulting in long experiments. Alternatively, trials can be modified to reduce the chances that the participant will notice the repetition, while avoiding significant changes in the scan patterns. In tasks in which the stimuli can be geometrically transformed without any loss of meaning, flipping the stimuli around either of the axes represents a candidate modification. In this study, we examined whether flipping of stimulus object trajectories around the x- and y-axes resulted in comparable scan patterns in a multiple object tracking task. We developed two new strategies for the statistical comparison of similarity between two groups of scan patterns, and then tested those strategies on artificial data. Our results suggest that although the scan patterns in flipped trials differ significantly from those in the original trials, this difference is small (as little as a 13 % increase of overall distance). Therefore, researchers could use geometric transformations to test more complex hypotheses regarding scan pattern coherence while retaining the same duration for experiments.
29.
  • Dewhurst, Richard, et al. (author)
  • It depends on how you look at it: Scanpath comparison in multiple dimensions with MultiMatch, a vector-based approach
  • 2012
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528.
  • Journal article (peer-reviewed), abstract:
    • Eye movement sequences, or scanpaths, vary depending on stimulus characteristics and task (Foulsham & Underwood, 2008; Land, Mennie, & Rusted, 1999). Common methods for comparing scanpaths, however, are limited in their ability to capture both the spatial and temporal properties of which a scanpath consists. Here we validate a new method for scanpath comparison based on geometric vectors, which compares scanpaths over multiple dimensions retaining positional and sequential information (Jarodzka, Holmqvist, & Nyström, 2010). 'MultiMatch' was tested in two experiments and pitted against ScanMatch (Cristino, Mathôt, Theeuwes, & Gilchrist, 2010), the most comprehensive adaptation of the popular Levenshtein method. Experiment 1 used synthetic data, demonstrating the greater sensitivity of MultiMatch to variations in spatial position. In Experiment 2, real eye movement recordings were taken from participants viewing sequences of dots, designed to elicit scanpath pairs with commonalities known to be problematic for algorithms (for example, when one scanpath is shifted in locus, or fixations fall on either side of an AOI boundary). The results illustrate the advantages of a multidimensional approach, revealing how two scanpaths differ. For instance, if one scanpath is the reverse copy of another, the difference is in direction but not the position of fixations; or if a scanpath is scaled down, the difference is in the length of saccadic vectors but not overall shape. As well as having enormous potential for any task in which consistency in eye movements is important (e.g., learning), MultiMatch is particularly relevant for "eye movements to nothing" in mental imagery research and embodiment of cognition, where satisfactory scanpath comparison algorithms are lacking.
30.
  • Dunn, Matt J, et al. (author)
  • Minimal reporting guideline for research involving eye tracking (2023 edition)
  • In: Behavior Research Methods. - 1554-3528.
  • Journal article (peer-reviewed), abstract:
    • A guideline is proposed that comprises the minimum items to be reported in research studies involving an eye tracker and human or non-human primate participant(s). This guideline was developed over a 3-year period using a consensus-based process via an open invitation to the international eye tracking community. This guideline will be reviewed at maximum intervals of 4 years.
33.
  • Hessels, Roy S., et al. (author)
  • Noise-robust fixation detection in eye movement data : Identification by two-means clustering (I2MC)
  • 2017
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 49:5, pp. 1802-1823
  • Journal article (peer-reviewed), abstract:
    • Eye-tracking research in infants and older children has gained a lot of momentum over the last decades. Although eye-tracking research in these participant groups has become easier with the advance of the remote eye-tracker, this often comes at the cost of poorer data quality than in research with well-trained adults (Hessels, Andersson, Hooge, Nyström, & Kemner Infancy, 20, 601-633, 2015; Wass, Forssman, & Leppänen Infancy, 19, 427-460, 2014). Current fixation detection algorithms are not built for data from infants and young children. As a result, some researchers have even turned to hand correction of fixation detections (Saez de Urabain, Johnson, & Smith Behavior Research Methods, 47, 53-72, 2015). Here we introduce a fixation detection algorithm, identification by two-means clustering (I2MC), built specifically for data across a wide range of noise levels and when periods of data loss may occur. We evaluated the I2MC algorithm against seven state-of-the-art event detection algorithms, and report that the I2MC algorithm's output is the most robust to high noise and data loss levels. The algorithm is automatic, works offline, and is suitable for eye-tracking data recorded with remote or tower-mounted eye-trackers using static stimuli. In addition to application of the I2MC algorithm in eye-tracking research with infants, school children, and certain patient groups, the I2MC algorithm also may be useful when the noise and data loss levels are markedly different between trials, participants, or time points (e.g., longitudinal research).
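A stripped-down sketch of the two-means idea at the heart of I2MC (the published algorithm adds interpolation, multi-scale downsampling, and careful weight aggregation): slide a window over the gaze samples, 2-means-cluster each window, and use the distance between the two cluster centers as a per-sample saccade weight:

    import numpy as np
    from sklearn.cluster import KMeans

    def clustering_weights(xy, win=40, step=5):
        # Windows that straddle a saccade split into two distant clusters, so
        # the center-to-center distance acts as a transition weight.
        n = xy.shape[0]
        w, counts = np.zeros(n), np.zeros(n)
        for s in range(0, n - win, step):
            km = KMeans(n_clusters=2, n_init=3, random_state=0).fit(xy[s:s + win])
            jump = np.linalg.norm(km.cluster_centers_[0] - km.cluster_centers_[1])
            w[s:s + win] += jump
            counts[s:s + win] += 1
        return w / np.maximum(counts, 1)

    rng = np.random.default_rng(8)
    xy = np.vstack([rng.normal([0, 0], 0.1, (100, 2)),    # fixation 1
                    rng.normal([5, 3], 0.1, (100, 2))])   # fixation 2
    weights = clustering_weights(xy)
    flagged = np.where(weights > weights.max() / 2)[0]
    print("high-weight samples near the transition:", flagged.min(), "-", flagged.max())
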
34.
  • Holmqvist, Kenneth, et al. (author)
  • A method for quantifying focused versus overview behavior in AOI sequences
  • 2011
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 43:4, pp. 987-998
  • Journal article (peer-reviewed), abstract:
    • We present a new measure for evaluating focused versus overview eye movement behavior in a stimulus divided by areas of interest. The measure can be used for overall data, as well as data over time. Using data from an ongoing project with mathematical problem solving, we describe how to calculate the measure and how to carry out a statistical evaluation of the results.
35.
  • Holmqvist, Kenneth, et al. (author)
  • Eye tracking : empirical foundations for a minimal reporting guideline
  • 2023
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 55:1, pp. 364-416
  • Journal article (peer-reviewed), abstract:
    • In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
36.
  • Hooge, Ignace, et al. (author)
  • Is human classification by experienced untrained observers a gold standard in fixation detection?
  • 2018
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 50:5, pp. 1864-1881
  • Journal article (peer-reviewed), abstract:
    • Despite early reports and the contemporary consensus on microsaccades as purely binocular phenomena, recent work has proposed not only the existence of monocular microsaccades, but also that they serve functional purposes. We take a critical look at the detection of monocular microsaccades from a signal perspective, using raw data and a state-of-the-art, video-based eye tracker. In agreement with previous work, monocular detections were present in all participants using a standard microsaccade detection algorithm. However, a closer look at the raw data invalidates the vast majority of monocular detections. These results again raise the question of the existence of monocular microsaccades, as well as the need for improved methods to study small eye movements recorded with video-based eye trackers.
37.
  • Hooge, Ignace T. C., et al. (author)
  • Fixation classification: how to merge and select fixation candidates
  • 2022
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 54:6, pp. 2765-2776
  • Journal article (peer-reviewed), abstract:
    • Eye trackers are applied in many research fields (e.g., cognitive science, medicine, marketing research). To give meaning to the eye-tracking data, researchers have a broad choice of classification methods to extract various behaviors (e.g., saccade, blink, fixation) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: What is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data with high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5∘), the classification algorithm of choice does not matter too much as long as it is sensitive enough and is followed by a rule that selects saccades with amplitudes larger than 1.0∘ and a rule that selects fixations with duration longer than 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters.
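A minimal sketch of such selection rules, using the 1.0∘ and 60 ms values from the abstract: merge fixation candidates separated by a sub-amplitude saccade, then drop fixations shorter than the minimum duration. The candidate structure is invented and the merging criterion is simplified relative to the paper:

    def select_fixations(cands, min_sacc_amp=1.0, min_fix_dur=0.060):
        # cands: list of dicts with onset/offset (s) and mean position (deg).
        merged = [dict(cands[0])]
        for c in cands[1:]:
            last = merged[-1]
            amp = ((c["x"] - last["x"]) ** 2 + (c["y"] - last["y"]) ** 2) ** 0.5
            if amp < min_sacc_amp:       # intervening saccade too small: merge
                last["offset"] = c["offset"]
            else:
                merged.append(dict(c))
        # Selection rule: keep only fixations of at least the minimum duration.
        return [f for f in merged if f["offset"] - f["onset"] >= min_fix_dur]

    cands = [
        {"onset": 0.00, "offset": 0.20, "x": 0.0, "y": 0.0},
        {"onset": 0.21, "offset": 0.25, "x": 0.3, "y": 0.1},  # merged into first
        {"onset": 0.30, "offset": 0.33, "x": 5.0, "y": 2.0},  # merged with next
        {"onset": 0.40, "offset": 0.80, "x": 5.1, "y": 2.1},
    ]
    print(select_fixations(cands))
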
38.
  • Hooge, Ignace T C, et al. (author)
  • How robust are wearable eye trackers to slow and fast head and body movements?
  • 2023
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 55:8
  • Journal article (peer-reviewed), abstract:
    • How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). The accuracy became worse as movement got wilder. During skipping and jumping, the biggest error was 5.8∘. However, most errors were smaller than 3∘. We discuss the implications of decreased accuracy in the context of different research scenarios.
39.
  • Hooge, Ignace T C, et al. (author)
  • Large eye-head gaze shifts measured with a wearable eye tracker and an industrial camera
  • In: Behavior Research Methods. - 1554-3528.
  • Journal article (peer-reviewed), abstract:
    • We built a novel setup to record large gaze shifts (up to 140∘). The setup consists of a wearable eye tracker and a high-speed camera with fiducial marker technology to track the head. We tested our setup by replicating findings from the classic eye-head gaze shift literature. We conclude that our new inexpensive setup is good enough to investigate the dynamics of large eye-head gaze shifts. This novel setup could be used for future research on large eye-head gaze shifts, but also for research on gaze during, e.g., human interaction. We further discuss reference frames and terminology in head-free eye tracking. Despite a transition from head-fixed eye tracking to head-free gaze tracking, researchers still use head-fixed eye movement terminology when discussing world-fixed gaze phenomena. We propose to use more specific terminology for world-fixed phenomena, including gaze fixation, gaze pursuit, and gaze saccade.
40.
  • Hooge, Ignace T C, et al. (author)
  • The pupil-size artefact (PSA) across time, viewing direction, and different eye trackers
  • 2021
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 53:5, pp. 1986-2006
  • Journal article (peer-reviewed), abstract:
    • The pupil-size artefact (PSA) is the gaze deviation reported by an eye tracker during pupil-size changes when the eye does not rotate. In the present study, we ask three questions: (1) how stable is the PSA over time, (2) does the PSA depend on properties of the eye-tracker setup, and (3) does the PSA depend on the participants' viewing direction? We found that the PSA is very stable over time for periods as long as one year, but may differ between participants. When comparing the magnitude of the PSA between eye trackers, we found it to be related to the direction of the eye-tracker camera axis, suggesting that the angle between the participants' viewing direction and the camera axis affects the PSA. We then investigated the PSA as a function of the participants' viewing direction. The PSA was non-zero for a viewing direction of 0∘ and depended on the viewing direction. These findings corroborate the suggestion by Choe et al. (Vision Research, 118, 48-59, 2016) that the PSA can be described by an idiosyncratic component and a viewing-direction-dependent component. Based on a simulation, we cannot claim that the viewing-direction-dependent component of the PSA is caused by the optics of the cornea.
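    The two-component description can be illustrated with a simple linear fit per participant. A minimal Python sketch with made-up data (not the study's): the intercept at 0∘ plays the role of the idiosyncratic component, and the slope captures the viewing-direction dependence.

      import numpy as np

      # Hypothetical data for one participant: viewing direction (deg) and
      # the apparent gaze shift reported during a pupil-size change (deg).
      viewing_dir = np.array([-15.0, -10.0, -5.0, 0.0, 5.0, 10.0, 15.0])
      psa = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.35, 0.2])

      slope, intercept = np.polyfit(viewing_dir, psa, 1)
      print(f"idiosyncratic component: {intercept:.2f} deg")
      print(f"direction-dependent component: {slope:.3f} deg/deg")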
  •  
41.
  • Lassalle, A, et al. (författare)
  • The EU-Emotion Voice Database
  • 2019
  • Ingår i: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 51:2, s. 493-506
  • Tidskriftsartikel (refereegranskat)
  •  
42.
  •  
43.
  • Niehorster, Diederick C, et al. (författare)
  • GlassesValidator : A data quality tool for eye tracking glasses
  • Ingår i: Behavior Research Methods. - 1554-3528.
  • Tidskriftsartikel (refereegranskat)abstract
    • According to the proposal for a minimum reporting guideline for an eye-tracking study by Holmqvist et al. (2022), the accuracy (in degrees) of eye-tracking data should be reported. Currently, there is no easy way to determine accuracy for wearable eye-tracking recordings. To enable determining accuracy quickly and easily, we have produced a simple validation procedure using a printable poster and accompanying Python software. We tested the poster and procedure with 61 participants using one wearable eye tracker. In addition, the software was tested with six different wearable eye trackers. We found that the validation procedure can be administered within a minute per participant and provides measures of accuracy and precision. Calculating the eye-tracking data-quality measures can be done offline on a simple computer and requires no advanced computer skills.
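    The two data-quality measures themselves are straightforward to compute once gaze has been mapped to angular coordinates relative to a validation target. A minimal Python sketch (our simplified planar-angle version, not the GlassesValidator implementation):

      import numpy as np

      def accuracy_and_precision(gaze_x, gaze_y, target_x, target_y):
          # Inputs in degrees, in the same reference frame.
          gaze_x = np.asarray(gaze_x, float)
          gaze_y = np.asarray(gaze_y, float)
          # Accuracy: mean angular offset from the target.
          offsets = np.hypot(gaze_x - target_x, gaze_y - target_y)
          accuracy = np.nanmean(offsets)
          # Precision: RMS of sample-to-sample distances.
          d = np.hypot(np.diff(gaze_x), np.diff(gaze_y))
          precision_rms = np.sqrt(np.nanmean(d ** 2))
          return accuracy, precision_rms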
  •  
44.
  • Niehorster, Diederick C., et al. (författare)
  • GlassesViewer : Open-source software for viewing and analyzing data from the Tobii Pro Glasses 2 eye tracker
  • 2020
  • Ingår i: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528.
  • Tidskriftsartikel (refereegranskat)abstract
    • We present GlassesViewer, open-source software for viewing and analyzing eye-tracking data of the Tobii Pro Glasses 2 head-mounted eye tracker, as well as the scene and eye videos and other data streams (pupil size, gyroscope, accelerometer, and TTL input) that this headset can record. The software, written in MATLAB, provides the following functionality: (1) a graphical interface for navigating the study and recording structure produced by the Tobii Glasses 2; (2) functionality to unpack, parse, and synchronize the various data and video streams comprising a Glasses 2 recording; and (3) a graphical interface for viewing the Glasses 2's gaze direction, pupil size, gyroscope and accelerometer time-series data, along with the recorded scene and eye camera videos. In this latter interface, segments of data can furthermore be labeled through user-provided event classification algorithms or by means of manual annotation. Lastly, the toolbox provides integration with the GazeCode tool by Benjamins et al. (2018), enabling a completely open-source workflow for analyzing Tobii Pro Glasses 2 recordings.
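    The synchronization step in (2) amounts to bringing streams recorded at different rates onto a common timeline. A minimal Python sketch of the idea (the toolbox itself is MATLAB; the arrays here are hypothetical):

      import numpy as np

      # Hypothetical streams: gaze at 50 Hz, gyroscope at 100 Hz,
      # each with its own timestamps in seconds.
      t_gaze = np.arange(0.0, 2.0, 1 / 50)
      t_gyro = np.arange(0.0, 2.0, 1 / 100)
      gyro_yaw = np.sin(2 * np.pi * t_gyro)   # stand-in signal

      # Resample the gyroscope stream onto the gaze timestamps by linear
      # interpolation so the streams can be analyzed sample by sample.
      gyro_on_gaze_time = np.interp(t_gaze, t_gyro, gyro_yaw)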
  •  
45.
  • Niehorster, Diederick C, et al. (författare)
  • Is apparent fixational drift in eye-tracking data due to filters or eyeball rotation?
  • 2021
  • Ingår i: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 53:1, s. 311-324
  • Tidskriftsartikel (refereegranskat)abstract
    • Eye trackers are sometimes used to study miniature eye movements, such as drift, that occur while observers fixate a static location on a screen. Specifically, such eye-tracking data can be analyzed by examining the temporal spectral composition of the recorded gaze-position signal, which allows its color to be assessed. However, not only rotations of the eyeball but also filters in the eye tracker may affect the signal's spectral color. Here, we therefore ask whether colored, as opposed to white, signal dynamics in eye-tracking recordings reflect fixational eye movements, or whether they are instead largely due to filters. We recorded gaze-position data with five eye trackers from four pairs of human eyes performing fixation sequences, and also from artificial eyes. We examined the spectral color of the gaze-position signals produced by the eye trackers, both with their filters switched on and for unfiltered data. We found that while filtered data recorded from both human and artificial eyes were colored for all eye trackers, for most eye trackers the signal was white when examining both unfiltered human and unfiltered artificial-eye data. These results suggest that color in the eye-movement recordings was due to filters for all eye trackers except the most precise one, where it may partly reflect fixational eye movements. As such, researchers studying fixational eye movements should carefully examine the properties of the filters in their eye tracker to ensure they are studying eyeball rotation and not filter properties.
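    Assessing the "color" of a gaze signal boils down to estimating the slope of its power spectrum on log-log axes. A minimal Python sketch (our illustration of the general approach, not the paper's analysis code):

      import numpy as np
      from scipy import signal

      def spectral_slope(pos, fs):
          # Welch power spectral density of a gaze-position signal.
          f, pxx = signal.welch(pos, fs=fs, nperseg=min(1024, len(pos)))
          keep = f > 0   # drop the DC bin before taking logs
          # Slope near 0 -> white dynamics; more negative -> colored.
          return np.polyfit(np.log10(f[keep]), np.log10(pxx[keep]), 1)[0]

      # Sanity check: white noise should yield a slope near 0.
      rng = np.random.default_rng(0)
      print(spectral_slope(rng.standard_normal(10_000), fs=1000.0))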
  •  
46.
  • Niehorster, Diederick C, et al. (författare)
  • SMITE : A toolbox for creating Psychophysics Toolbox and PsychoPy experiments with SMI eye trackers
  • 2020
  • Ingår i: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 52:1, s. 295-304
  • Tidskriftsartikel (refereegranskat)abstract
    • We present SMITE, a toolbox for measuring eye movements with eye trackers manufactured by SMI GmbH. The toolbox wraps the iViewX SDK provided by SMI, allowing simple integration of SMI eye trackers into Psychophysics Toolbox and PsychoPy programs. It provides a graphical interface for participant setup and calibration, implemented natively in Psychophysics Toolbox and PsychoPy drawing commands, as well as several convenience features for, inter alia, creating gaze-contingent experiments and working with two-computer setups. Given that SMI GmbH and its support department have closed down, this toolbox is expected to provide owners of SMI eye trackers with an important new way to continue creating experiments with their systems. The eye trackers supported by the toolbox are the SMI HiSpeed 1250, SMI RED systems, SMI RED-m, SMI RED250mobile, and SMI REDn.
  •  
47.
  • Niehorster, Diederick C., et al. (författare)
  • The impact of slippage on the data quality of head-worn eye trackers
  • 2020
  • Ingår i: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528.
  • Tidskriftsartikel (refereegranskat)abstract
    • Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant's head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil-Labs' Pupil in 3D mode, and (iv) Pupil-Labs' Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8–3.1∘ increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial-expression tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of their characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.
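    The reported error measure, an increase in gaze deviation over baseline, can be computed along these lines (a minimal Python sketch with hypothetical inputs, not the study's code):

      import numpy as np

      def deviation_increase(gaze_x, gaze_y, target_x, target_y, baseline_deg):
          # Median angular deviation (deg) from a fixation target during a
          # task episode, minus the deviation measured at baseline.
          dev = np.hypot(np.asarray(gaze_x, float) - target_x,
                         np.asarray(gaze_y, float) - target_y)
          return np.nanmedian(dev) - baseline_deg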
  •  
48.
  • Niehorster, Diederick C., et al. (författare)
  • Titta : A toolbox for creating PsychToolbox and PsychoPy experiments with Tobii eye trackers
  • 2020
  • Ingår i: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528.
  • Tidskriftsartikel (refereegranskat)abstract
    • We present Titta, an open-source toolbox for controlling eye trackers manufactured by Tobii AB from MATLAB and Python. The toolbox provides a wrapper around the Tobii Pro SDK, offering a convenient graphical participant setup, calibration, and validation interface implemented using the PsychToolbox and PsychoPy toolboxes. The toolbox furthermore enables MATLAB and Python experiments to communicate with Tobii Pro Lab through the TalkToProLab tool. This enables experiments to be created and run using the freedom of MATLAB and Python, while the recording can be visualized and analyzed in Tobii Pro Lab. All screen-mounted Tobii eye trackers that are supported by the Tobii Pro SDK are also supported by Titta. At the time of writing, these are the Spectrum, Nano, TX300, T60XL, X3-120, X2-60, X2-30, X60, X120, T60 and T120 from Tobii Pro, and the 4C from Tobii Tech.
  •  
49.
  • Niehorster, Diederick C, et al. (författare)
  • What to expect from your remote eye-tracker when participants are unrestrained
  • 2018
  • Ingår i: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 50:1, s. 213-227
  • Tidskriftsartikel (refereegranskat)abstract
    • The marketing materials of remote eye trackers suggest that data quality is invariant to the position and orientation of the participant as long as the participant's eyes are within the eye tracker's headbox, the area where tracking is possible. As such, remote eye trackers are marketed as allowing the reliable recording of gaze from participant groups that cannot be restrained, such as infants, schoolchildren, and patients with muscular or brain disorders. Practical experience and previous research, however, tell us that eye-tracking data quality, e.g., the accuracy of the recorded gaze position and the amount of data loss, deteriorates (compared to well-trained participants in chinrests) when the participant is unrestrained and assumes a non-optimal pose in front of the eye tracker. How then can researchers working with unrestrained participants choose an eye tracker? Here we investigated the performance of five popular remote eye trackers from EyeTribe, SMI, SR Research, and Tobii in a series of tasks where participants took on non-optimal poses. We report that the tested systems varied in the amount of data loss and systematic offsets observed during our tasks; the EyeLink and EyeTribe in particular had large problems. Furthermore, the Tobii eye trackers reported data for two eyes when only one eye was visible to the eye tracker. This study provides practical insight into how popular remote eye trackers perform when recording from unrestrained participants. It furthermore provides a testing method for evaluating whether a tracker is suitable for studying a certain target population, one that manufacturers can also use during the development of new eye trackers.
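    Of the data-quality measures mentioned, data loss is the simplest to compute: the share of samples for which the tracker reported no valid gaze position. A minimal Python sketch (assuming, hypothetically, that invalid samples are coded as NaN):

      import numpy as np

      def data_loss_percent(gaze_x, gaze_y):
          gaze_x = np.asarray(gaze_x, float)
          gaze_y = np.asarray(gaze_y, float)
          invalid = np.isnan(gaze_x) | np.isnan(gaze_y)
          return 100.0 * invalid.mean()

      print(data_loss_percent([0.1, np.nan, 0.2, 0.3],
                              [0.0, 1.0, np.nan, 0.1]))   # 50.0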
  •  
50.
  • Nirme, Jens, et al. (författare)
  • Motion capture-based animated characters for the study of speech-gesture integration
  • 2020
  • Ingår i: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 52:3, s. 1339-1354
  • Tidskriftsartikel (refereegranskat)abstract
    • Digitally animated characters are promising tools in research on how we integrate information from speech and visual sources such as gestures, because they allow specific gesture features to be manipulated in isolation. We present an approach combining motion capture and 3D-animated characters that allows us to manipulate natural individual gesture strokes for experimental purposes, for example to temporally shift gestures and present them in ecologically valid sequences. We exemplify how such stimuli can be used in an experiment investigating implicit detection of speech–gesture (a)synchrony, and discuss the general applicability of the workflow for research in this domain.
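    The temporal-shift manipulation itself is simple once gesture strokes are annotated on the speech timeline. A minimal Python sketch (hypothetical annotation values, not the authors' pipeline):

      def shift_stroke(stroke_onset_s, stroke_offset_s, shift_s):
          # Move a gesture stroke by shift_s seconds relative to the
          # (unchanged) speech track; negative values make the gesture
          # precede the speech it was originally aligned with.
          return stroke_onset_s + shift_s, stroke_offset_s + shift_s

      print(shift_stroke(1.20, 1.85, -0.5))   # gesture now leads speech by 500 ms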
  •  