SwePub

Search results for query: WFRF:(Nair Vipul)

  • Results 1-10 of 11
1.
  • Billing, Erik, PhD, 1981-, et al. (authors)
  • The DREAM Dataset : Supporting a data-driven study of autism spectrum disorder and robot enhanced therapy
  • 2020
  • In: PLOS ONE. Public Library of Science. ISSN 1932-6203. 15:8
  • Journal article (peer-reviewed). Abstract:
    • We present a dataset of behavioral data recorded from 61 children diagnosed with Autism Spectrum Disorder (ASD). The data were collected during a large-scale evaluation of Robot Enhanced Therapy (RET). The dataset covers over 3000 therapy sessions and more than 300 hours of therapy. Half of the children interacted with the social robot NAO, supervised by a therapist; the other half, constituting a control group, interacted directly with a therapist. Both groups followed the Applied Behavior Analysis (ABA) protocol. Each session was recorded with three RGB cameras and two RGBD (Kinect) cameras, providing detailed information about the children’s behavior during therapy. This public release of the dataset comprises body motion, head position and orientation, and eye-gaze variables, all specified as 3D data in a joint frame of reference. In addition, metadata including participant age, gender, and autism diagnosis (ADOS) variables are included. We release these data in the hope of supporting further data-driven studies towards improved therapy methods as well as a better understanding of ASD in general.
2.
  • Hemeren, Paul, et al. (authors)
  • Actions, intentions and environmental constraints in biological motion perception
  • 2018
  • In: Spatial Cognition in a Multimedia and Intercultural World. Springer. pp. S8-S8
  • Conference paper (peer-reviewed). Abstract:
    • In many ways, human cognition is fundamentally predictive. We predict the sensory consequences of our own actions, but we also predict, and react to, the sensory consequences of how others experience their own actions. This ability extends to perceiving the intentions of other humans based on past and current actions. We present research results showing that social aspects and future movement patterns can be predicted from fairly simple kinematic patterns in biological motion sequences. The purpose of this presentation is to demonstrate and discuss the different environmental (gravity and perspective) and bodily constraints on understanding our social and movement-based interactions with others. In a series of experiments, we have used psychophysical methods and recordings from interactions with objects in natural settings. This includes experiments on the incidental processing of biological motion as well as driving simulator studies that examine the role of kinematic patterns of cyclists and drivers’ accuracy in predicting cyclists’ intentions in traffic. The results we present show both clear effects of “low-level” biological motion factors, such as opponent motion, on the incidental triggering of attention in basic perceptual tasks, and “higher-level” top-down guided perception in the intention prediction of cyclist behavior. We propose to use our results to stimulate discussion about the interplay between expectation-mediated and stimulus-driven effects of visual processing in spatial cognition in the context of human interaction. Such discussion will include the role of context in gesture recognition and the extent to which our visual system can handle visually complex environments.
3.
  • Hemeren, Paul, et al. (authors)
  • Similarity Judgments of Hand-Based Actions : From Human Perception to a Computational Model
  • 2019
  • In: 42nd European Conference on Visual Perception (ECVP) 2019, Leuven. Sage Publications. pp. 79-79
  • Conference paper (peer-reviewed). Abstract:
    • How do humans perceive actions in relation to other similar actions? How can we develop artificial systems that mirror this ability? This research uses human similarity judgments of point-light actions to evaluate the output from different visual computing algorithms for motion understanding, based on movement, spatial features, motion velocity, and curvature. The aim of the research is twofold: (a) to devise algorithms for motion segmentation into action primitives, which can then be used to build hierarchical representations for estimating action similarity, and (b) to develop a better understanding of human action categorization in relation to judging action similarity. The long-term goal of the work is to allow an artificial system to recognize similar classes of actions, also across different viewpoints. To this end, computational methods for visual action classification are used and then compared with human classification via similarity judgments. Confusion matrices for similarity judgments from these comparisons are assessed for all possible pairs of actions. The preliminary results show some overlap between the outcomes of the two analyses. We discuss the extent of the consistency of the different algorithms with human action categorization as a way to model action perception.
4.
  • Nair, Vipul, et al. (authors)
  • Action similarity judgment based on kinematic primitives
  • 2020
  • In: 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). IEEE. ISBN 9781728173061, 9781728173207
  • Conference paper (peer-reviewed). Abstract:
    • Understanding which features humans rely on in visually recognizing action similarity is a crucial step towards a clearer picture of human action perception from a learning and developmental perspective. In the present work, we investigate to what extent a computational model based on kinematics can determine action similarity and how its performance relates to human similarity judgments of the same actions. To this aim, twelve participants perform an action similarity task, and their performance is compared to that of a computational model solving the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experiment results show that both the model and human participants can reliably identify whether two actions are the same or not. However, the model produces more false hits and has a greater selection bias than human participants. A possible reason for this is the particular sensitivity of the model towards kinematic primitives of the presented actions. In a second experiment, human participants' performance on an action identification task indicated that they relied solely on kinematic information rather than on action semantics. The results show that both the model and human performance are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.
5.
  • Nair, Vipul, et al. (authors)
  • Anticipatory Instances In Films : What Do They Tell Us About Event Understanding?
  • 2022
  • Conference paper (peer-reviewed). Abstract:
    • Event perception research highlights the significance of visuospatial attributes that influence event segmentation and prediction. The present study investigates how the visuospatial attributes of film events correlate with viewers’ ongoing event processes such as anticipatory gaze, prediction and segmentation. We derive film instances (such as occlusion, enter/exit, and turning towards) that show trends of high anticipatory viewing behaviour from an in-depth multimodal event-feature analysis (covering speech, hand-action, gaze, etc.) of 25 movie scenes, correlated with visual attention analysis (eye-tracking of 32 participants per scene). The first results provide a solid basis for using these derived instances to examine further the nature of the different visuospatial attributes in relation to event changes (where anticipation and segmentation occur). With these results, we aim to argue that by investigating film instances of an anticipatory nature, one can explicate how humans perform high-level characterization of visuospatial attributes and understand events.
6.
  • Nair, Vipul, et al. (authors)
  • Attentional Synchrony in Films : A Window to Visuospatial Characterization of Events
  • 2022
  • In: Proceedings SAP 2022. New York, NY, USA: Association for Computing Machinery. ISBN 9781450394550
  • Conference paper (peer-reviewed). Abstract:
    • The study of event perception emphasizes the importance of visuospatial attributes in everyday human activities and how they influence event segmentation, prediction and retrieval. Attending to these visuospatial attributes is the first step toward event understanding, and therefore correlating attentional measures with such attributes would help to further our understanding of event comprehension. In this study, we focus on attentional synchrony among other attentional measures and analyze selected film scenes through the lens of a visuospatial event model. Here we present the first results of an in-depth multimodal (head-turn, hand-action, etc.) visuospatial analysis of 10 movie scenes correlated with visual attention (eye-tracking of 32 participants per scene). With these results, we tease apart event segments of high and low attentional synchrony and describe the distribution of attention in relation to the visuospatial features. This analysis gives us an indirect measure of attentional saliency for a scene of a particular visuospatial complexity, which ultimately directs the attentional selection of the observers in a given context.
7.
  • Nair, Vipul (author)
  • Estimating action similarities : from human perception to computational model
  • 2019
  • In: 15th SweCog Conference. Skövde: University of Skövde. ISBN 9789198366754. pp. 7-7
  • Conference paper (peer-reviewed). Abstract:
    • This study investigates human perception of action similarities: do similarity judgments form patterns or clusters of similar actions? And if there are clusters, which salient visual features of the actions do humans rely upon? Insights into these questions help in devising computational models that create visual primitives for human motion segmentation and understanding. Such models would be advantageous in understanding a human-event scenario or a human-robot interaction setting, since the model would find the same action regularities salient as a human would. To that end, we study how humans judge similarities between different familiar hand-based human actions. A total of nineteen commonly seen kitchen-based hand actions (e.g., cutting bread, washing dishes) were chosen as stimuli. Participants performed two psychophysical experiments, an action similarity judgment task (experiment 1) and an action discrimination task (experiment 2). Human judgment data were analyzed for similarity patterns. Additionally, similarity patterns from three different visual computing algorithms for motion understanding (low-level spatial and velocity features) were compared against the human judgment patterns, showing some overlap. We discuss the similarity patterns as a way to model action perception that builds on action primitives.
8.
  • Nair, Vipul, et al. (authors)
  • Event segmentation through the lens of multimodal interaction
  • 2021
  • In: Proceedings of the 8th International Conference on Spatial Cognition. Springer. pp. 61-62
  • Conference paper (peer-reviewed). Abstract:
    • Research in naturalistic event perception highlights the significance of visuospatial attributes pertaining to everyday embodied human interaction. This research focuses on developing a conceptual cognitive model to characterise the role of multimodality in human interaction and its influence on visuospatial representation, event segmentation, and high-level event prediction. Our research aims to characterise the influence of modalities such as visual attention, speech, hand-action, body-pose, head-movement, spatial-position, motion, and gaze on judging event segments. Our particular focus is on visuoauditory narrative media. We select 25 movie scenes from a larger project concerning cognitive film/media studies and perform detailed multimodal analysis against the backdrop of an elaborate (formally specified) event analysis ontology. Corresponding to the semantic event analysis of each scene, we also perform high-level visual attention analysis (eye-tracking based) with 32 participants per scene. Correlating the features of each scene with visual attention constitutes the key method that we utilise in our analysis. We hypothesise that attentional performance on event segments reflects the influence exerted by multimodal cues on event segmentation and prediction, thereby enabling us to explicate the representational basis of events. The first results show trends of multiple viewing behaviours such as attentional synchrony, gaze pursuit and attentional saliency towards human faces. Work is presently in progress, further investigating the role of visuospatial/auditory cues in high-level event perception, e.g., involving anticipatory gaze vis-a-vis event prediction. Applications and impact of this conceptual cognitive model and its behavioural outcomes abound in domains such as (digital) narrative media design and social robotics.
9.
  • Nair, Vipul, et al. (authors)
  • Incidental processing of biological motion : Effects of orientation, local-motion and global-form features
  • 2018
  • Conference paper (peer-reviewed). Abstract:
    • Previous studies on biological motion perception indicate that the processing of biological motion is fast and automatic. A segment of these studies has shown that task-irrelevant, to-be-ignored biological figures are incidentally processed, since they interfere with the main task. However, more evidence is needed to understand the role of local-motion and global-form processing mechanisms in incidentally processed biological figures. This study investigates the effects of local-motion and global-form features on incidental processing. Point-light walkers (PLW) were used in a flanker paradigm in a direction discrimination task to assess the influence of the flankers. Our results show that upright-oriented PLW flankers with global-form features have more influence on visual processing of the central PLW than inverted or scrambled PLW flankers with only local-motion features.
10.
  • Nair, Vipul, et al. (authors)
  • Kinematic primitives in action similarity judgments : A human-centered computational model
  • 2023
  • In: IEEE Transactions on Cognitive and Developmental Systems. IEEE. ISSN 2379-8920, E-ISSN 2379-8939. 15:4, pp. 1981-1992
  • Journal article (peer-reviewed). Abstract:
    • This paper investigates the role that kinematic features play in human action similarity judgments. The results of three experiments with human participants are compared with those of a computational model that solves the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experimental results show that both the model and human participants can reliably identify whether two actions are the same or not. Specifically, the similarity of most of the given actions could be judged based on very limited information from a single feature domain (velocity or spatial). However, both velocity and spatial features were necessary to reach human-level performance on the evaluated actions. The experimental results also show that, in an action identification task, human participants clearly relied on kinematic information rather than on action semantics. The results show that both the model and human performance are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.
