SwePub
Search the SwePub database


Hit list for the search "WFRF:(Tirado Carlos) srt2:(2019)"


  • Result 1-4 of 4
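For readers who want this hit list in machine-readable form, the sketch below queries the Libris xsearch API, which also indexes SwePub, with the same query string as this page. It is a minimal sketch: the endpoint, the d=swepub database parameter, and the JSON layout are assumptions to verify against the current Libris/SwePub API documentation.

    # Minimal sketch: fetch this SwePub hit list via the Libris xsearch API.
    # Assumptions: the xsearch endpoint, the d=swepub parameter, and the
    # 'xsearch' -> 'list' JSON layout. Verify against the API documentation.
    import json
    import urllib.parse
    import urllib.request

    query = 'WFRF:(Tirado Carlos) srt2:(2019)'  # query string from this page
    params = urllib.parse.urlencode({
        'd': 'swepub',    # assumed parameter selecting the SwePub database
        'query': query,
        'format': 'json',
        'n': 20,          # maximum number of records to return
    })

    with urllib.request.urlopen(f'http://libris.kb.se/xsearch?{params}') as resp:
        data = json.load(resp)

    for record in data.get('xsearch', {}).get('list', []):
        print(record.get('title'))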
1.
  • Buijsman, Stefan, et al. (author)
  • Spatial-numerical associations : Shared symbolic and non-symbolic numerical representations
  • 2019
  • In: Quarterly Journal of Experimental Psychology. - : SAGE Publications. - 1747-0218 .- 1747-0226. ; 72:10, s. 2423-2436
  • Journal article (peer-reviewed), abstract:
    • During the last decades, there have been a large number of studies into the number-related abilities of humans. As a result, we know that humans and non-human animals have a system known as the approximate number system that allows them to distinguish between collections based on their number of items, separately from any counting procedures. Dehaene and others have argued for a model on which this system uses representations for numbers that are spatial in nature and are shared by our symbolic and non-symbolic processing of numbers. However, there is a conflicting theoretical perspective in which there are no representations of numbers underlying the approximate number system, but only quantity-related representations. This perspective would then suggest that there are no shared representations between symbolic and non-symbolic processing. We review the evidence on spatial biases resulting from the activation of numerical representations, for both non-symbolic and symbolic tests. These biases may help decide between the theoretical differences; shared representations are expected to lead to similar biases regardless of the format, whereas different representations more naturally explain differences in biases, and thus behaviour. The evidence is not yet decisive, as the behavioural evidence is split: we expect bisection tasks to eventually favour shared representations, whereas studies on the spatial-numerical association of response codes (SNARC) effect currently favour different representations. We discuss how this impasse may be resolved, in particular, by combining these behavioural studies with relevant neuroimaging data. If this approach is carried forward, then it may help decide which of these two theoretical perspectives on number representations is correct.
2.
  • Marmolejo-Ramos, Fernando, et al. (author)
  • The Allocation of Valenced Percepts Onto 3D Space
  • 2019
  • In: Frontiers in Psychology. - : Frontiers Media SA. - 1664-1078. ; 10
  • Journal article (peer-reviewed), abstract:
    • Research on the metaphorical mapping of valenced concepts onto space indicates that positive, neutral, and negative concepts are mapped onto upward, midward, and downward locations, respectively. More recently, this type of research has been tested for the very first time in 3D physical space. The findings corroborate the mapping of valenced concepts onto the vertical space as described above but further show that positive and negative concepts are placed close to and away from the body; neutral concepts are placed midway. The current study aimed at investigating whether valenced perceptual stimuli are positioned onto 3D space akin to the way valenced concepts are positioned. By using a unique device known as the cognition cube, participants placed visual, auditory, tactile and olfactory stimuli on 3D space. The results mimicked the placing of valenced concepts onto 3D space; i.e., positive percepts were placed in upward and close-to-the-body locations and negative percepts were placed in downward and away-from-the-body locations; neutral percepts were placed midway. This pattern of results was more pronounced in the case of visual stimuli, followed by auditory, tactile, and olfactory stimuli. Significance Statement: Just recently, a unique device called the cognition cube (CC) made it possible to find that positive words are mapped onto upward and close-to-the-body locations and negative words are mapped onto downward and away-from-the-body locations; neutral words are placed midway. This way of placing words in relation to the body is consistent with an approach-avoidance effect such that good and bad things are kept close to and away from one's body. We demonstrate for the very first time that this same pattern emerges when visual, auditory, tactile, and olfactory perceptual stimuli are placed on 3D physical space. We believe these results are significant in that the CC can be used as a new tool to diagnose emotion-related disorders.
3.
  • Nilsson, Mats E., et al. (author)
  • Psychoacoustic evidence for stronger discrimination suppression of spatial information conveyed by lag-click interaural time than interaural level differences
  • 2019
  • In: Journal of the Acoustical Society of America. - : Acoustical Society of America (ASA). - 0001-4966 .- 1520-8524. ; 145:1, s. 512-524
  • Journal article (peer-reviewed), abstract:
    • Listeners have limited access to spatial information in lagging sound, a phenomenon known as discrimination suppression. It is unclear whether discrimination suppression works differently for interaural time differences (ITDs) and interaural level differences (ILDs). To explore this, three listeners assessed the lateralization (left or right) and detection (present or not) of lag clicks with a large fixed ITD (350 μs) or ILD (10 dB) following a diotic lead click, with inter-click intervals (ICIs) of 0.125-256 ms. Performance was measured on a common scale for both cues: the lag-lead amplitude ratio [dB] at 75% correct answers. The main finding was that the lateralization thresholds, but not detection thresholds, were more strongly elevated for ITD-only than ILD-only clicks at intermediate ICIs (1-8 ms) in which previous research has found the strongest discrimination suppression effects. Altogether, these findings suggest that discrimination suppression involves mechanisms that make spatial information conveyed by lag-click ITDs less accessible to listeners than spatial information conveyed by lag-click ILDs.
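The threshold measure above, the lag-lead amplitude ratio in dB at 75% correct answers, is the kind of point estimate usually read off a fitted psychometric function. The sketch below is purely illustrative, not the authors' analysis code: it fits a logistic function rising from chance (0.5 in a left/right task) to invented proportion-correct data and inverts the fit at p = 0.75.

    # Illustrative only: estimate a 75%-correct threshold from invented data.
    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(x, midpoint, slope):
        # Logistic rising from 0.5 (chance in a two-alternative task) to 1.0.
        return 0.5 + 0.5 / (1.0 + np.exp(-slope * (x - midpoint)))

    # Hypothetical lag-lead amplitude ratios (dB) and proportions correct.
    ratio_db = np.array([-20.0, -15.0, -10.0, -5.0, 0.0, 5.0])
    p_correct = np.array([0.52, 0.58, 0.70, 0.85, 0.94, 0.98])

    (midpoint, slope), _ = curve_fit(psychometric, ratio_db, p_correct,
                                     p0=(-8.0, 0.5))

    # Invert the fitted function at p = 0.75; with this parameterization
    # the 75% point is exactly the fitted midpoint.
    p = 0.75
    threshold = midpoint - np.log(0.5 / (p - 0.5) - 1.0) / slope
    print(f'75%-correct threshold: {threshold:.1f} dB')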
4.
  • Tirado, Carlos, et al. (author)
  • The Echobot : An automated system for stimulus presentation in studies of human echolocation
  • 2019
  • In: PLOS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 14:10
  • Journal article (peer-reviewed), abstract:
    • Echolocation is the detection and localization of objects by listening to the sounds they reflect. Early studies of human echolocation used real objects that the experimental leader positioned manually before each experimental trial. The advantage of this procedure is the use of realistic stimuli; the disadvantage is that manually shifting stimuli between trials is very time consuming, making it difficult to use psychophysical methods based on the presentation of hundreds of stimuli. The present study tested a new automated system for stimulus presentation, the Echobot, that overcomes this disadvantage. We tested 15 sighted participants with no prior experience of echolocation on their ability to detect the reflection of a loudspeaker-generated click from a 50 cm circular aluminum disk. The results showed that most participants were able to detect the sound reflections. Performance varied considerably, however, with mean individual thresholds of detection ranging from 1 to 3.2 m distance from the disk. Three participants in the loudspeaker experiment were also tested using self-generated vocalizations. One participant performed better using vocalizations and one performed much worse than in the loudspeaker experiment, illustrating that performance in echolocation experiments using vocalizations not only measures the ability to detect sound reflections, but also the ability to produce efficient echolocation signals. Overall, the present experiments show that the Echobot may be a useful tool in research on human echolocation.
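The psychophysical methods the abstract refers to, with hundreds of automated trials per listener, are commonly implemented as adaptive staircases. The sketch below shows a 1-up/2-down staircase over disk distance with a simulated observer; this rule converges near 70.7% correct detections. It is a hypothetical illustration, not the Echobot's actual control software.

    # Hypothetical 1-up/2-down staircase over disk distance (metres).
    # Not the Echobot's real software; the observer here is simulated.
    import random

    def simulate_trial(distance_m):
        # Stand-in observer: detection gets harder as the disk moves away.
        p_correct = max(0.5, 1.0 - 0.15 * distance_m)
        return random.random() < p_correct

    def run_staircase(start_m=1.0, step_m=0.2, n_reversals=8):
        distance, streak, direction = start_m, 0, +1
        reversals = []
        while len(reversals) < n_reversals:
            if simulate_trial(distance):
                streak += 1
                if streak == 2:              # two correct: move disk farther
                    streak = 0
                    if direction == -1:
                        reversals.append(distance)
                    direction = +1
                    distance += step_m
            else:                            # one error: move disk closer
                streak = 0
                if direction == +1:
                    reversals.append(distance)
                direction = -1
                distance = max(0.2, distance - step_m)
        # Average the last six reversal distances as the threshold estimate.
        return sum(reversals[-6:]) / len(reversals[-6:])

    print(f'Estimated detection threshold: {run_staircase():.2f} m')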


 