SwePub
Search the SwePub database


Search: WFRF:(Hemeren Paul)

  • Result 1-44 of 44
1.
  • Drejing, Karl, 1988-, et al. (author)
  • Engagement: A traceable motivational concept in human-robot interaction
  • 2015
  • In: Affective Computing and Intelligent Interaction (ACII), 2015 International Conference on. - : IEEE Computer Society. - 9781479999538 ; , s. 956-961
  • Conference paper (peer-reviewed), abstract:
    • Engagement is essential to meaningful social interaction between humans. Understanding the mechanisms by which we detect engagement of other humans can help us understand how we can build robots that interact socially with humans. However, there is currently a lack of measurable engagement constructs on which to build an artificial system that can reliably support social interaction between humans and robots. This paper proposes a definition, based on motivation theories, and outlines a framework to explore the idea that engagement can be seen as specific behaviors and their attached magnitude or intensity. This is done by the use of data from multiple sources such as observer ratings, kinematic data, audio and outcomes of interactions. We use the domain of human-robot interaction in order to illustrate the application of this approach. The framework further suggests a method to gather and aggregate this data. If certain behaviors and their attached intensities co-occur with various levels of judged engagement, then engagement could be assessed by this framework consequently making it accessible to a robotic platform. This framework could improve the social capabilities of interactive agents by adding the ability to notice when and why an agent becomes disengaged, thereby providing the interactive agent with an ability to reengage him or her. We illustrate and propose validation of our framework with an example from robot-assisted therapy for children with autism spectrum disorder. The framework also represents a general approach that can be applied to other social interactive settings between humans and robots, such as interactions with elderly people.
  •  
2.
  • Hemeren, Paul, et al. (author)
  • A Framework for Representing Action Meaning in Artificial Systems via Force Dimensions
  • 2012
  • In: Artificial General Intelligence. - Heidelberg, Dordrecht, London, New York : Springer Berlin/Heidelberg. - 9783642355059 - 9783642355066 ; , s. 99-106, s. 99-99
  • Conference paper (peer-reviewed), abstract:
    • General (human) intelligence critically includes understanding human action, both action production and action recognition. Human actions also convey social signals that allow us to predict the actions of others (intent) as well as the physical and social consequences of our actions. What's more, we are able to talk about what we (and others) are doing. We present a framework for action recognition and communication that is based on access to the force dimensions that constrain human actions. The central idea here is that forces and force patterns constitute vectors in conceptual spaces that can represent actions and events. We conclude by pointing to the consequences of this view for how artificial systems could be made to understand and communicate about actions.
  •  
3.
  • Hemeren, Paul, et al. (author)
  • Actions, intentions and environmental constraints in biological motion perception
  • 2018
  • In: Spatial Cognition in a Multimedia and Intercultural World. - : Springer. ; , s. S8-S8
  • Conference paper (peer-reviewed), abstract:
    • In many ways, human cognition is importantly predictive. We predict the sensory consequences of our own actions, but we also predict, and react to, the sensory consequences of how others experience their own actions. This ability extends to perceiving the intentions of other humans based on past and current actions. We present research results showing that social aspects and future movement patterns can be predicted from fairly simple kinematic patterns in biological motion sequences. The purpose of this presentation is to demonstrate and discuss the different environmental (gravity and perspective) and bodily constraints on understanding our social and movement-based interactions with others. In a series of experiments, we have used psychophysical methods and recordings from interactions with objects in natural settings. This includes experiments on the incidental processing of biological motion as well as driving simulator studies that examine the role of kinematic patterns of cyclists and drivers’ accuracy in predicting cyclists’ intentions in traffic. The results we present show both clear effects of “low-level” biological motion factors, such as opponent motion, on the incidental triggering of attention in basic perceptual tasks and “higher-level” top-down guided perception in the intention prediction of cyclist behavior. We propose to use our results to stimulate discussion about the interplay between expectation-mediated and stimulus-driven effects of visual processing in spatial cognition in the context of human interaction. Such discussion will include the role of context in gesture recognition and to what extent our visual system can handle visually complex environments.
  •  
5.
  • Hemeren, Paul E., et al. (author)
  • Deriving motor primitives through action segmentation
  • 2011
  • In: Frontiers in Psychology. - : Frontiers Media S.A. - 1664-1078. ; 1, s. 1-11
  • Journal article (peer-reviewed), abstract:
    • The purpose of the present experiment is to further understand the effect of levels of processing (top-down vs. bottom-up) on the perception of movement kinematics and primitives for grasping actions, in order to gain insight into possible primitives used by the mirror system. In the present study, we investigated the potential of identifying such primitives using an action segmentation task. Specifically, we investigated whether or not segmentation was driven primarily by the kinematics of the action, as opposed to high-level top-down information about the action and the object used in the action. Participants in the experiment were shown 12 point-light movies of object-centered hand/arm actions that were either presented in their canonical orientation together with the object in question (top-down condition) or upside down (inverted) without information about the object (bottom-up condition). The results show that (1) despite impaired high-level action recognition for the inverted actions, participants were able to reliably segment the actions according to lower-level kinematic variables, and (2) segmentation behavior in both groups was significantly related to the kinematic variables of change in direction, velocity, and acceleration of the wrist (thumb and finger tips) for most of the included actions. This indicates that top-down activation of an action representation leads to similar segmentation behavior for hand/arm actions compared to bottom-up, or local, visual processing when performing a fairly unconstrained segmentation task. Motor primitives as parts of more complex actions may therefore be reliably derived through visual segmentation based on movement kinematics.
  •  
7.
  • Hemeren, Paul E. (author)
  • Frequency, ordinal position and semantic distance as measures of cross-cultural stability and hierarchies for action verbs
  • 1996
  • In: Acta Psychologica. - : Elsevier. - 0001-6918 .- 1873-6297. ; 91:1, s. 39-66
  • Journal article (peer-reviewed), abstract:
    • Swedish and English (American) speaking subjects were given a superordinate description for a general class of actions that depict bodily movement. Based on a listing task similar to the one used in Battig and Montague (1969), the subjects were instructed to list all the actions that conformed to the superordinate. The results of the task indicate graded structure for the superordinate category as well as hierarchical relations between a basic and subordinate level as shown by measures of response frequencies and mean ordinal positions. These measures also correlated highly between the Swedish and American samples for the most frequently listed verbs, indicating a strong degree of cross-cultural stability. In an additional test of this stability, the ordinal positions of the verbs were used as proximity data in multidimensional scaling analyses in order to obtain a measure of the semantic distance between the different verbs. A correlation between the Swedish and American samples, using the derived distances for all possible pairs of the verbs, revealed a significant degree of stability. Furthermore, groupings of locomotory and vocal actions in the 3-dimensional multidimensional scaling solutions showed a tendency towards a much stronger stability. A speculative account of these results is proposed in terms of the physical constraints in human motion and the frequency of performing or seeing others perform actions around us.
  •  
8.
  • Hemeren, Paul E., et al. (author)
  • Lexicalization of natural actions and cross-linguistic stability
  • 2008
  • In: Proceedings of ISCA Tutorial and Research Workshop On Experimental Linguistics ExLing 2008. - : University of Athens. - 9789604660209 ; , s. 105-108
  • Conference paper (peer-reviewed), abstract:
    • To what extent do Modern Greek, Polish, Swedish and American English similarly lexicalize action concepts, and how similar are the semantic associations between verbs denoting natural actions? Previous results indicate cross-linguistic stability between American English, Swedish, and Polish in verbs denoting basic human body movement, mouth movements, and sound production. The research reported here extends the cross-linguistic comparison to include Greek, which, unlike Polish, American English and Swedish, is a path-language. We used action imagery criteria to obtain lists of verbs from native Greek speakers. The data were analyzed by using multidimensional scaling, and the results were compared to those previously obtained.
  •  
9.
  • Hemeren, Paul, et al. (author)
  • Kinematic-based classification of social gestures and grasping by humans and machine learning techniques
  • 2021
  • In: Frontiers in Robotics and AI. - : Frontiers Media S.A. - 2296-9144. ; 8:308, s. 1-17
  • Journal article (peer-reviewed), abstract:
    • The affective motion of humans conveys messages that other humans perceive and understand without conventional linguistic processing. This ability to classify human movement into meaningful gestures or segments also plays a critical role in creating social interaction between humans and robots. In the research presented here, grasping and social gesture recognition by humans and four machine learning techniques (k-Nearest Neighbor, Locality-Sensitive Hashing Forest, Random Forest and Support Vector Machine) is assessed by using human classification data as a reference for evaluating the classification performance of the machine learning techniques for thirty hand/arm gestures. The gestures are rated according to the extent of grasping motion in one task and the extent to which the same gestures are perceived as social in another task. The results indicate that humans clearly rate differently according to the two different tasks. The machine learning techniques provide a similar classification of the actions according to grasping kinematics and social quality. Furthermore, there is a strong association between gesture kinematics and judgments of grasping and the social quality of the hand/arm gestures. Our results support previous research on intention-from-movement understanding that demonstrates the reliance on kinematic information for perceiving the social aspects and intentions in different grasping actions as well as communicative point-light actions.
  •  
11.
  • Hemeren, Paul (author)
  • Mind in Action : Action Representation and the Perception of Biological Motion
  • 2008
  • Doctoral thesis (other academic/artistic), abstract:
    • The ability to understand and communicate about the actions of others is a fundamental aspect of our daily activity. How can we talk about what others are doing? What qualities do different actions have such that they cause us to see them as being different or similar? What is the connection between what we see and the development of concepts and words or expressions for the things that we see? To what extent can two different people see and talk about the same things? Is there a common basis for our perception, and is there then a common basis for the concepts we form and the way in which the concepts become lexicalized in language? The broad purpose of this thesis is to relate aspects of perception, categorization and language to action recognition and conceptualization. This is achieved by empirically demonstrating a prototype structure for action categories and by revealing the effect this structure has on language via the semantic organization of verbs for natural actions. The results also show that implicit access to categorical information can affect the perceptual processing of basic actions. These findings indicate that our understanding of human actions is guided by the activation of high level information in the form of dynamic action templates or prototypes. More specifically, the first two empirical studies investigate the relation between perception and the hierarchical structure of action categories, i.e., subordinate, basic, and superordinate level action categories. Subjects generated lists of verbs based on perceptual criteria. Analyses based on multidimensional scaling showed a significant correlation for the semantic organization of a subset of the verbs for English and Swedish speaking subjects. Two additional experiments were performed in order to further determine the extent to which action categories exhibit graded structure, which would indicate the existence of prototypes for action categories. 
The results from typicality ratings and category verification showed that typicality judgments reliably predict category verification times for instances of different actions. Finally, the results from a repetition (short-term) priming paradigm suggest that high level information about the categorical differences between actions can be implicitly activated and facilitates the later visual processing of displays of biological motion. This facilitation occurs for upright displays, but appears to be lacking for displays that are shown upside down. These results show that the implicit activation of information about action categories can play a critical role in the perception of human actions.
  •  
12.
  • Hemeren, Paul (author)
  • Orientation specific effects of automatic access to categorical information in biological motion perception
  • 2005
  • In: Proceedings of the 27th Annual Conference of the Cognitive Science Society. - : Lawrence Erlbaum Associates. - 9780976831815 ; , s. 935-940
  • Conference paper (peer-reviewed), abstract:
    • Previous findings from studies of biological motion perception suggest that access to stored high-level knowledge about action categories contributes to the fast identification of actions depicted in point-light displays of biological motion. Three priming experiments were conducted to investigate the automatic access to stored categorical level information in the visual processing of biological motion and the extent to which this access varies as a function of action orientation. The results show that activation of categorical level information occurs even when participants are given a task that does not require access to the categorical nature of the actions depicted in point-light displays. The results suggest that the visual processing of upright actions is indicative of Hochstein and Ahissar’s notion of vision at a glance, whereas inverted actions indicate vision with scrutiny.
  •  
14.
  • Hemeren, Paul (author)
  • Reverse Hierarchy Theory and the Role of Kinematic Information in Semantic Level Processing and Intention Perception
  • 2019
  • Conference paper (peer-reviewed), abstract:
    • In many ways, human cognition is importantly predictive (e.g., Clark, 2015). A critical source of information that humans use to anticipate the future actions of other humans and to perceive intentions is bodily movement (e.g., Ansuini et al., 2014; Becchio et al., 2018; Koul et al., 2019; Sciutti et al., 2015). This ability extends to perceiving the intentions of other humans based on past and current actions. The purpose of this abstract is to address the issue of anticipation according to levels of processing in visual perception and experimental results that demonstrate high-level semantic processing in the visual perception of various biological motion displays. These research results (Hemeren & Thill, 2011; Hemeren et al., 2018; Hemeren et al., 2016) show that social aspects and future movement patterns can be predicted from fairly simple kinematic patterns in biological motion sequences, which demonstrates the different environmental (gravity and perspective) and bodily constraints that contribute to understanding our social and movement-based interactions with others. Understanding how humans perceive anticipation and intention amongst one another should help us create artificial systems that also can perceive human anticipation and intention.
  •  
15.
  • Hemeren, Paul (author)
  • Signals for Active Safety Systems to Detect Cyclists and Their Intentions in Traffic
  • 2018
  • Conference paper (peer-reviewed), abstract:
    • Objectives: Human cognition is importantly predictive. This predictive ability can also be applied to predict the future actions of cyclists in traffic. Active safety systems in (semi-)autonomous vehicles will likely need to detect and predict human actions occurring in different traffic situations. Results from two experiments demonstrate the effect of different patterns of human movement on predicting the behavior of cyclists and the distance it takes drivers to detect cyclists in a city environment. This research was carried out by observing recorded sequences on a computer but also in a driving simulator, in order to include more naturalistic conditions and to achieve a high level of experimental control. As a complement to our previous research (Hemeren et al., 2014), we aimed to determine the distance at which drivers would detect and predict cyclists’ behavior. Methods: Participants in both experiments (90 participants in experiment 1 and 24 in experiment 2) observed video-recorded cyclists wearing three different patterns of reflective clothing (Fig. 1): biomotion, vest, and the legal minimum requirement (legal), in which no reflector material was worn by the cyclists. In experiment 1, participants were instructed to predict whether an approaching cyclist would make a left turn or continue straight on when approaching a crossing. This task was also performed during daylight, dusk and at night. In the second experiment, participants in a driving simulator indicated (as a secondary task) when they saw a cyclist riding along the side of the road. Results: The biomotion reflective clothing led to a prediction accuracy of 88% for cyclists’ intentions at 9 meters before a crossing in the nighttime condition. For the legal minimum, the result was 59%, and for the vest 67%. Detection distance in the driving simulator was also significantly greater for the biomotion condition compared to the legal and vest conditions: visual detection occurred at almost twice the distance for biomotion compared to the other two reflective clothing conditions. Conclusions: The results point to the critical role that biological motion can play in predicting the intentions of cyclists and detecting them in traffic. This information can be used to inform (semi-)autonomous systems of human intentions in traffic.
  •  
16.
  • Hemeren, Paul, et al. (author)
  • Similarity Judgments of Hand-Based Actions : From Human Perception to a Computational Model
  • 2019
  • In: 42nd European Conference on Visual Perception (ECVP) 2019 Leuven. - : Sage Publications. ; , s. 79-79
  • Conference paper (peer-reviewed), abstract:
    • How do humans perceive actions in relation to other similar actions? How can we develop artificial systems that can mirror this ability? This research uses human similarity judgments of point-light actions to evaluate the output from different visual computing algorithms for motion understanding, based on movement, spatial features, motion velocity, and curvature. The aim of the research is twofold: (a) to devise algorithms for motion segmentation into action primitives, which can then be used to build hierarchical representations for estimating action similarity, and (b) to develop a better understanding of human action categorization in relation to judging action similarity. The long-term goal of the work is to allow an artificial system to recognize similar classes of actions, also across different viewpoints. To this purpose, computational methods for visual action classification are used and then compared with human classification via similarity judgments. Confusion matrices for similarity judgments from these comparisons are assessed for all possible pairs of actions. The preliminary results show some overlap between the outcomes of the two analyses. We discuss the extent of the consistency of the different algorithms with human action categorization as a way to model action perception.
  •  
18.
  • Hemeren, Paul, et al. (author)
  • The Use of Visual Cues to Determine the Intent of Cyclists in Traffic
  • 2014
  • In: 2014 IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA). - : IEEE Press. - 9781479935635 - 9781479935642 ; , s. 47-51
  • Conference paper (peer-reviewed), abstract:
    • The purpose of this research was to answer the following central questions: 1) How accurate are human observers at predicting the behavior of cyclists as the cyclists approached a crossing? 2) If the accuracy is reliably better than chance, what cues were used to make the predictions? 3) At what distance from the crossing did the most critical cues occur? 4) Can the cues be used in a model that can reliably predict cyclist intent? We present results that show a number of indicators that can be used to predict the intention of a cyclist, i.e., the future actions of a cyclist, e.g., “left turn” or “continue forward”. Results of empirical studies show that humans are reasonably good at this type of prediction for a majority of the situations studied. However, some situations seem to contain conflicting information. The results also suggested that human prediction of intention relies to a large extent on a single “strong” indicator, e.g., that the cyclist makes a clear “head movement”. Several “weaker” indicators that together could form a strong “combined indicator”, or equivalently strong evidence, are likely to be missed or too complex to be handled by humans in real time. We suggest this line of research can be used to create decision support systems that predict the behavior of cyclists in traffic.
  •  
19.
  • Hemeren, Paul, et al. (author)
  • The Visual Perception of Biological Motion in Adults
  • 2020
  • In: Modelling Human Motion. - Cham : Springer. - 9783030467319 - 9783030467326 ; , s. 53-71
  • Book chapter (peer-reviewed), abstract:
    • This chapter presents research about the roles of different levels of visual processing and motor control in our ability to perceive biological motion produced by humans and by robots. The levels of visual processing addressed include high-level semantic processing of action prototypes based on global features as well as lower-level local processing based on kinematic features. A further important aspect concerns the interaction between these two levels of processing and the interaction between our own movement patterns and their impact on our visual perception of biological motion. The authors’ research results describe the conditions under which semantic and kinematic features influence one another in our understanding of human actions. In addition, results are presented to illustrate the claim that motor control and different levels of the visual perception of biological motion have clear consequences for human–robot interaction. Understanding the movement of robots is greatly facilitated when that movement is consistent with the psychophysical constraints of Fitts’ law, minimum jerk and the two-thirds power law.
  •  
20.
  • Hemeren, Paul, et al. (author)
  • The walker congruency effect and incidental processing of configural and local features in point-light walkers
  • 2018
  • Conference paper (other academic/artistic), abstract:
    • Two visual flanker experiments investigated the roles of configural and local opponent motion cues on the incidental processing of a point-light walker with diagonally configured limbs. Different flankers were used to determine the extent of interference on the visual processing of a central walker. Flankers (walkers) with diagonally configured limbs lacked the local opponent motion of the feet and hands, but contained configural information. Partially scrambled displays with intact opponent motion of the feet at the bottom of the display lacked configural information. These two conditions resulted in different effects of incidental processing. Configural information, without opponent motion, leads to changes in reaction time across flanker conditions, with no measurable congruency effect, while feet-based opponent motion causes a congruency effect without changes in reaction time across different flanker conditions. Life detection is a function of both sources of information.
  •  
21.
  • Hemeren, Paul (author)
  • To AIR is Human, or is it? : The Role of High-Level Representations and Conscious Awareness in Biological Motion Perception
  • 2019
  • Conference paper (peer-reviewed), abstract:
    • The purpose of this research is to address the nature of high-level processing within visual perception. In particular, results from the visual processing of biological motion will be used to discuss the role of attention in high-level vision and visual consciousness. Original results from 3 priming experiments indicate “automatic” high-level semantic activation in biological motion perception. The view presented here is discussed in the context of Prinz’s (2000, 2003) AIR theory. AIR stands for Attended Intermediate-level Representations and claims that visual consciousness resides at the level of intermediate-level representations. In contrast, the view presented here is that results from behavioral and neuroscientific studies of biological motion suggest that visual consciousness occurs at high cortical levels. Moreover, the Reverse Hierarchy Theory of Hochstein and Ahissar (2002) asserts that spread attention in high cortical areas is indicative of what they term “vision at a glance.” The gist of their theory is that explicit high-level visual processing involves initial feedforward mechanisms that implicitly follow a bottom-up hierarchical pathway. The end product of the processing, and the beginning of explicit visual perception, is conscious access to perceptual content in high-level cortical areas. Finally, I discuss the specific claims in AIR and present objections to Prinz’s arguments for why high-level visual processors are not good candidates for the locale of consciousness. In conclusion, the central claim of AIR, with its emphasis on the connection between intermediate-level representations and perceptual awareness, seems to be too strong, and the arguments against high-level perceptual awareness are not convincing.
  •  
22.
  • Hemeren, Paul, et al. (author)
  • URBANIST : Signaler som används för att avläsa cyklisters intentioner i trafiken [Signals used to read cyclists' intentions in traffic]
  • 2013
  • Reports (other academic/artistic), abstract:
    • By observing a small number of specific signals, cyclists' behavior can be predicted with good accuracy, which suggests that the identified signals are meaningful. Knowledge of these signals can, among other things, be put to practical use in developing simple aids, such as the deliberate placement of fluorescent or reflective material on joints and/or the introduction of helmets with differently colored sides. Such aids can be expected to reinforce the communication of important signals. The knowledge can also be used to train inexperienced drivers. In both cases, this can ultimately lead to a safer traffic environment for vulnerable road users.
  •  
23.
  • Hemeren, Paul, et al. (author)
  • Vad gör en kognitionsvetare? [What does a cognitive scientist do?]
  • 2012. - 1
  • In: Kognitionsvetenskap. - Lund. - 9789144051666 ; , s. 57-66
  • Book chapter (peer-reviewed)
  •  
25.
  • Lagerstedt, Erik (author)
  • Perceiving agents : Pluralism, interaction, and existence
  • 2024
  • Doctoral thesis (other academic/artistic), abstract:
    • Perception is a vast subject to study. One way to approach and study it might therefore be to break the concept down into smaller pieces. Specific modes of sensation, mechanisms, phenomena, or contexts might be selected as the proxy or starting point for addressing perception as a whole. Another approach would be to widen the concept and attempt to study perception through the larger context of which it is a part. I have, in this thesis, attempted the latter strategy, by emphasising an existential perspective, and examining the role and nature of perception through that lens. The larger perspective of broadening the scope does not specifically allow for better answers, but rather different kinds of answers, providing complementary ways of exploring what it means to be an artificial or natural agent, and what consequences that can have for the access to, as well as the representation, processing, and communication of, information. A broader stance can also facilitate exploration of questions regarding larger perspectives, such as the relation between individual agents, as well as their place in larger structures such as societies and cyber-physical systems. In this thesis I use existential phenomenology to frame the concept of perception, while drawing from theories in biology and psychology. My work has a particular focus on human-robot interaction, a field of study at a fascinating intersection of humans designing, using, and communicating with something human-made, partially human-like, yet distinctly non-human. The work is also applied to some aspects of the traffic domain which, given the increasing interest in self-driving vehicles, is partially another instance of complex and naturalistic human-robot interaction. Ultimately, I argue for a pluralistic and pragmatic approach to the understanding of perception and its related concepts. To understand a system of agents as they interact, it is not only necessary to acknowledge their respective circumstances, but also to take seriously the idea that none of the agents’ constructed worlds is more or less real; they might only be more or less relevant in relation to specific contexts, perspectives, or needs. Such an approach is particularly relevant when addressing the complexities of the increasingly urgent sustainability challenges.
  •  
26.
  • Li, Cai, et al. (author)
  • k-Nearest-Neighbour based Numerical Hand Posture Recognition using a Smart Textile Glove
  • 2015
  • In: AMBIENT 2015. - : International Academy, Research and Industry Association (IARIA). - 9781612084213 ; , s. 36-41
  • Conference paper (peer-reviewed), abstract:
    • In this article, the authors present an interdisciplinary project that illustrates the potential and challenges in dealing with electronic textiles as sensing devices. An interactive system consisting of a knitted sensor glove, an electronic circuit, and a numeric hand posture recognition algorithm based on k-nearest neighbours (kNN) is introduced. The design of the sensor glove itself is described, considering two sensitive fiber materials – piezoresistive and piezoelectric fibers – and the construction using an industrial knitting machine as well as the electronic setup is sketched out. Based on the characteristics of the textile sensors, a kNN technique based on a condensed dataset has been chosen to recognize hand postures indicating numbers from one to five from the sensor data. The authors describe two types of data condensation techniques (Reduced Nearest Neighbours and Fast Condensed Nearest Neighbours) used to improve the data quality for kNN, which are compared in terms of run time, condensation rate and recognition accuracy. Finally, the article gives an outlook on potential application scenarios for sensor gloves in pervasive computing.
  •  
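The kNN-with-condensation approach named in the abstract above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the toy feature vectors and the Hart-style condensation loop (a close relative of the Reduced Nearest Neighbours technique the paper compares) are assumptions for the sake of the example.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    values, counts = np.unique(train_y[nearest], return_counts=True)
    return values[np.argmax(counts)]

def condense(train_X, train_y):
    """Hart-style condensation: grow a subset until every remaining
    sample is correctly classified by 1-NN over the kept subset."""
    keep = [0]
    changed = True
    while changed:
        changed = False
        for i in range(len(train_X)):
            if i in keep:
                continue
            if knn_predict(train_X[keep], train_y[keep], train_X[i], k=1) != train_y[i]:
                keep.append(i)
                changed = True
    return train_X[keep], train_y[keep]
```

The point of condensation in this setting is that each incoming glove reading only has to be compared against the small kept subset, which mostly retains class-boundary samples, rather than against the full recorded dataset.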
27.
  •  
28.
  • Malmgren, Helge, 1945, et al. (author)
  • Begrepp och mentala representationer
  • 2012
  • In: Kognitionsvetenskap. - Lund : Studentlitteratur. - 9789144051666 ; , s. 175-190
  • Book chapter (peer-reviewed)
  •  
29.
  • Nair, Vipul, et al. (author)
  • Action similarity judgment based on kinematic primitives
  • 2020
  • In: 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). - : IEEE. - 9781728173061 - 9781728173207
  • Conference paper (peer-reviewed)abstract
    • Understanding which features humans rely on in visually recognizing action similarity is a crucial step towards a clearer picture of human action perception from a learning and developmental perspective. In the present work, we investigate to what extent a computational model based on kinematics can determine action similarity and how its performance relates to human similarity judgments of the same actions. To this aim, twelve participants perform an action similarity task, and their performance is compared to that of a computational model solving the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experiment results show that both the model and human participants can reliably identify whether two actions are the same or not. However, the model produces more false hits and has a greater selection bias than human participants. A possible reason for this is the particular sensitivity of the model to the kinematic primitives of the presented actions. In a second experiment, human participants' performance on an action identification task indicated that they relied solely on kinematic information rather than on action semantics. The results show that both model and human performance are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.
  •  
30.
  • Nair, Vipul, et al. (author)
  • Anticipatory Instances In Films : What Do They Tell Us About Event Understanding?
  • 2022
  • Conference paper (peer-reviewed)abstract
    • Event perception research highlights the significance of visuospatial attributes that influence event segmentation and prediction. The present study investigates how the visuospatial attributes of film events correlate with viewers' ongoing event processes, such as anticipatory gaze, prediction and segmentation. We derive film instances (such as occlusion, enter/exit, turn-towards) that show trends of high anticipatory viewing behaviour from an in-depth multimodal (speech, hand-action, gaze, etc.) event-feature analysis of 25 movie scenes, correlated with visual attention analysis (eye-tracking, 32 participants per scene). The first results provide a solid basis for using these derived instances to examine further the nature of the different visuospatial attributes in relation to event changes (where anticipation and segmentation occur). With these results, we aim to argue that by investigating film instances of an anticipatory nature, one can explicate how humans perform high-level characterization of visuospatial attributes and understand events.
  •  
31.
  • Nair, Vipul, et al. (author)
  • Attentional Synchrony in Films : A Window to Visuospatial Characterization of Events
  • 2022
  • In: Proceedings SAP 2022. - New York, NY, USA : Association for Computing Machinery. - 9781450394550
  • Conference paper (peer-reviewed)abstract
    • The study of event perception emphasizes the importance of visuospatial attributes in everyday human activities and how they influence event segmentation, prediction and retrieval. Attending to these visuospatial attributes is the first step toward event understanding, and therefore correlating attentional measures with such attributes would help to further our understanding of event comprehension. In this study, we focus on attentional synchrony, among other attentional measures, and analyze select film scenes through the lens of a visuospatial event model. Here we present the first results of an in-depth multimodal (head-turn, hand-action, etc.) visuospatial analysis of 10 movie scenes correlated with visual attention (eye-tracking, 32 participants per scene). With the results, we tease apart event segments of high and low attentional synchrony and describe the distribution of attention in relation to the visuospatial features. This analysis gives us an indirect measure of attentional saliency for a scene with a particular visuospatial complexity, ultimately directing the attentional selection of the observers in a given context.
  •  
32.
  • Nair, Vipul, et al. (author)
  • Event segmentation through the lens of multimodal interaction
  • 2021
  • In: Proceedings of the 8th International Conference on Spatial Cognition. - : Springer. ; , s. 61-62
  • Conference paper (peer-reviewed)abstract
    • Research in naturalistic event perception highlights the significance of visuospatial attributes pertaining to everyday embodied human interaction. This research focuses on developing a conceptual cognitive model to characterise the role of multimodality in human interaction, and its influence on visuospatial representation, event segmentation, and high-level event prediction. Our research aims to characterise the influence of modalities such as visual attention, speech, hand-action, body-pose, head-movement, spatial-position, motion, and gaze on judging event segments. Our particular focus is on visuoauditory narrative media. We select 25 movie scenes from a larger project concerning cognitive film/media studies and perform a detailed multimodal analysis against the backdrop of an elaborate (formally specified) event analysis ontology. Corresponding to the semantic event analysis of each scene, we also perform high-level visual attention analysis (eye-tracking based) with 32 participants per scene. Correlating the features of each scene with visual attention constitutes the key method that we utilise in our analysis. We hypothesise that attentional performance on event segments reflects the influence exerted by multimodal cues on event segmentation and prediction, thereby enabling us to explicate the representational basis of events. The first results show trends of multiple viewing behaviours such as attentional synchrony, gaze pursuit and attentional saliency towards human faces. Work is presently in progress, further investigating the role of visuospatial/auditory cues in high-level event perception, e.g., involving anticipatory gaze vis-à-vis event prediction. Applications and impact of this conceptual cognitive model and its behavioural outcomes abound in domains such as (digital) narrative media design and social robotics.
  •  
33.
  • Nair, Vipul, et al. (author)
  • Incidental processing of biological motion : Effects of orientation, local-motion and global-form features
  • 2018
  • Conference paper (peer-reviewed)abstract
    • Previous studies on biological motion perception indicate that the processing of biological motion is fast and automatic. A segment of these studies has shown that task-irrelevant and to-be-ignored biological figures are incidentally processed, since they interfere with the main task. However, more evidence is needed to understand the role of local-motion and global-form processing mechanisms in incidentally processed biological figures. This study investigates the effects of local-motion and global-form features on incidental processing. Point-light walkers (PLW) were used in a flanker paradigm in a direction discrimination task to assess the influence of the flankers. Our results show that upright-oriented PLW flankers with global-form features have more influence on the visual processing of the central PLW than inverted or scrambled PLW flankers with only local-motion features.
  •  
34.
  • Nair, Vipul, et al. (author)
  • Kinematic primitives in action similarity judgments : A human-centered computational model
  • 2023
  • In: IEEE Transactions on Cognitive and Developmental Systems. - : IEEE. - 2379-8920 .- 2379-8939. ; 15:4, s. 1981-1992
  • Journal article (peer-reviewed)abstract
    • This paper investigates the role that kinematic features play in human action similarity judgments. The results of three experiments with human participants are compared with a computational model that solves the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experimental results show that both the model and human participants can reliably identify whether two actions are the same or not. Specifically, most of the given actions could be judged for similarity based on very limited information from a single feature domain (velocity or spatial). Both velocity and spatial features were, however, necessary to reach human-level performance on the evaluated actions. Human performance on an action identification task also indicated that participants clearly relied on kinematic information rather than on action semantics. The results show that both model and human performance are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.
  •  
35.
  • Nair, Vipul (author)
  • The Observer Lens : Characterizing Visuospatial Features in Multimodal Interactions
  • 2024
  • Doctoral thesis (other academic/artistic)abstract
    • Understanding the intricate nature of human interactions relies heavily on the ability to discern and interpret inherent information. Central to this interpretation are the sensory features—visual, spatial, and auditory—collectively referred to as visuospatial in this thesis. The low-level (e.g., motion kinematics) and high-level (e.g., gestures and speech) visuospatial features significantly influence human perception, aiding in the deduction of intent, goals, emotions, and more. From a computational viewpoint, these features are crucial for interpreting events, from discerning body poses to evaluating action similarity, particularly for computational systems designed to interact closely with humans. This thesis examines the impact of visuospatial features on human event observation within an informatics context, concentrating on (1) investigating the effect of visuospatial features on observers' perception; and (2) aligning this investigation towards outcomes applicable to informatics. Taking a human-centric perspective, the thesis methodically probes the role of visuospatial features, drawing from prior cognitive research and underscoring the significance of features like action kinematics, gaze, turn-taking, and gestures in event comprehension. Balancing both reductionist and naturalistic perspectives, the research examines specific visuospatial features and their impact on human visual processing and attention mechanisms:
- Visual Processing: highlights the visual processing effects of action features, including kinematics, local motion, and global form, as well as the role of factors like semantics and familiarity. These are demonstrated using human performance metrics in perceptual tasks and comparative analyses with selected computational models employing basic kinematic representations, revealing the adaptive nature of the visual system and enhancing Human Action Recognition models.
- Visual Attention: highlights the attentional effects of interaction cues, such as speech, hand action, body pose, motion, and gaze, using the developed 'Visuospatial Model'. This model presents a systematic approach for characterizing visuospatial features in everyday events, exemplified using a curated movie dataset and a newly developed comprehensive dataset of naturalistic multimodal events.
The findings emphasize the integration of behavioral and perceptual parameters with computationally aligned strategies, such as benchmarking perceptual tasks against human behavioral and psychophysical metrics, thereby providing a richer context for developing systematic tools and methodologies for multimodal event characterization. At its core, the thesis characterizes the role of visuospatial features in shaping human perception and its implications for the development of cognitive technologies equipped with autonomous perception and interaction capabilities -- essential for domains like social robotics, autonomous driving, media studies, traffic safety, and virtual characters.
  •  
36.
  •  
37.
  • Sun, Jiong, et al. (author)
  • Categories of touch : Classifying human touch using a soft tactile sensor
  • 2017
  • Conference paper (peer-reviewed)abstract
    • Social touch plays an important role not only in human communication but also in human-robot interaction. We here report results from an ongoing study on affective human-robot interaction. In our previous research, touch type is shown to be informative for communicated emotion. Here, a soft matrix array sensor is used to capture the tactile interaction between human and robot and a method based on PCA and kNN is applied in the experiment to classify different touch types, constituting a pre-stage to recognizing emotional tactile interaction. Results show an average recognition rate for classified touch type of 71%, with a large variability between different types of touch. Results are discussed in relation to affective HRI and social robotics.
  •  
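The PCA-plus-kNN pipeline named in the abstract above can be sketched as follows. This is a minimal illustrative sketch, assuming flattened tactile-sensor frames as row vectors; the SVD-based PCA, the toy dimensions, and all names are assumptions rather than the study's actual code.

```python
import numpy as np

def fit_pca(X, n_components):
    """Return the mean and top principal axes of row-vector data X."""
    mean = X.mean(axis=0)
    # Right singular vectors of the centred data are the principal axes.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(X, mean, axes):
    """Project rows of X onto the principal axes."""
    return (X - mean) @ axes.T

def knn_classify(train_Z, train_y, z, k=3):
    """Majority vote among the k nearest projected training samples."""
    nearest = np.argsort(np.linalg.norm(train_Z - z, axis=1))[:k]
    values, counts = np.unique(train_y[nearest], return_counts=True)
    return values[np.argmax(counts)]
```

Projecting high-dimensional sensor frames onto a few principal components before running kNN both denoises the tactile readings and keeps the distance computations cheap, which is the usual rationale for this combination.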
38.
  • Sun, Jiong, et al. (author)
  • Tactile Interaction and Social Touch : Classifying Human Touch using a Soft Tactile Sensor
  • 2017
  • In: HAI '17. - New York : Association for Computing Machinery (ACM). - 9781450351133 ; , s. 523-526
  • Conference paper (peer-reviewed)abstract
    • This paper presents an ongoing study on affective human-robot interaction. In our previous research, touch type is shown to be informative for communicated emotion. Here, a soft matrix array sensor is used to capture the tactile interaction between human and robot and 6 machine learning methods including CNN, RNN and C3D are implemented to classify different touch types, constituting a pre-stage to recognizing emotional tactile interaction. Results show an average recognition rate of 95% by C3D for classified touch types, which provide stable classification results for developing social touch technology. 
  •  
39.
  • Thill, Serge, et al. (author)
  • Driver adherence to recommendations from support systems improves if the systems explain why they are given : A simulator study
  • 2018
  • In: Transportation Research Part F. - : Elsevier BV. - 1369-8478 .- 1873-5517. ; 56, s. 420-435
  • Journal article (peer-reviewed)abstract
    • This paper presents a large-scale simulator study on driver adherence to recommendations given by driver support systems, specifically eco-driving support and navigation support. 123 participants took part in this study, and drove a vehicle simulator through a pre-defined environment for a duration of approximately 10 min. Depending on the experimental condition, participants were either given no eco-driving recommendations, or a system whose provided support was either basic (recommendations were given in the form of an icon displayed in a manner that simulates a heads-up display) or informative (the system additionally displayed a line of text justifying its recommendations). A navigation system that likewise provided either basic or informative support, depending on the condition, was also provided. Effects are measured in terms of estimated simulated fuel savings as well as engine braking/coasting behaviour and gear change efficiency. Results indicate improvements in all variables. In particular, participants who had the support of an eco-driving system spent a significantly higher proportion of the time coasting. Participants also changed gears at lower engine RPM when using an eco-driving support system, and significantly more so when the system provided justifications. Overall, the results support the notion that providing reasons why a support system puts forward a certain recommendation improves adherence to it over mere presentation of the recommendation. Finally, results indicate that participants’ driving style was less eco-friendly if the navigation system provided justifications but the eco-system did not. This may be due to participants considering the two systems as one whole rather than separate entities with individual merits. This has implications for how to design and evaluate a given driver support system since its effectiveness may depend on the performance of other systems in the vehicle.
  •  
40.
  •  
41.
  • Thill, Serge, et al. (author)
  • Prediction of human action segmentation based on end-effector kinematics using linear models
  • 2011
  • In: European Perspectives on Cognitive Science. - Sofia : New Bulgarian University Press. - 9789545356605
  • Conference paper (peer-reviewed)abstract
    • The work presented in this paper builds on previous research which analysed human action segmentation in the case of simple object manipulations with the hand (rather than larger-scale actions). When designing algorithms to segment observed actions, for instance to train robots by imitation, the typical approach involves non-linear models but it is less clear whether human action segmentation is also based on such analyses. In the present paper, we therefore explore (1) whether linear models built from observed kinematic variables of a human hand can accurately predict human action segmentation and (2) what kinematic variables are the most important in such a task. In previous work, we recorded speed, acceleration and change in direction for the wrist and the tip of each of the five fingers during the execution of actions as well as the segmentation of these actions into individual components by humans. Here, we use this data to train a large number of models based on every possible training set available and find that, amongst others, the speed of the wrist as well as the change in direction of the index finger were preferred in models with good performance. Overall, the best models achieved R2 values over 0.5 on novel test data but the average performance of trained models was modest. We suggest that this is due to a suboptimal training set (which was not specifically designed for the present task) and that further work be carried out to identify better training sets as our initial results indicate that linear models may indeed be a viable approach to predicting human action segmentation.
  •  
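The linear-model approach in the abstract above (ordinary least squares from kinematic variables such as wrist speed to a segmentation signal, scored with R2) can be sketched as below. Variable names and the toy data are assumptions for illustration.

```python
import numpy as np

def fit_linear(features, target):
    """Ordinary least squares with an intercept term."""
    X = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef

def predict(features, coef):
    """Apply the fitted coefficients, including the intercept."""
    X = np.column_stack([np.ones(len(features)), features])
    return X @ coef

def r_squared(y_true, y_pred):
    """Coefficient of determination, the score used in the paper."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Fitting many such models on different subsets of kinematic variables and comparing their R2 on held-out data is the kind of model-selection procedure the abstract describes for finding which variables (e.g. wrist speed) best predict human segmentation.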
42.
  • Thill, Serge, et al. (author)
  • The apparent intelligence of a system as a factor in situation awareness
  • 2014
  • In: 2014 IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support, CogSIMA 2014. - : IEEE Computer Society. - 9781479935642 - 9781479935635 ; , s. 52-58
  • Conference paper (peer-reviewed)abstract
    • In the context of interactive and automated vehicles, driver situation awareness becomes an increasingly important consideration for future traffic systems, whether it concerns the current status of the vehicle or the surrounding environment. Here, we present a simulator study investigating whether the apparent intelligence of a vehicle - i.e. intelligence as perceived by the driver, which is distinct from how intelligent a designer might think the system is - is a factor in the expectations and behaviour of the driver. We are specifically interested in perceived intelligence as a factor in situation awareness. To this end, the study modulates both traffic conditions and the type of navigational assistance given in a goal-navigation task to influence participants' perception of the system. Our results show two distinct effects relevant to situation awareness: 1) participants who think the vehicle is highly intelligent spend more time glancing at the surrounding environment through the left door window than those who rank its intelligence low, and 2) participants prefer an awareness of why the navigation aid decided on specific directions, but are sensitive to the manner in which it is presented. Our results have broader implications for the design of future automated systems in vehicles.
  •  
43.
  • Vernon, David, 1958-, et al. (author)
  • An Architecture-oriented Approach to System Integration in Collaborative Robotics Research Projects : An Experience Report
  • 2015
  • In: Journal of Software Engineering for Robotics. - Dalmine. - 2035-3928. ; 6:1, s. 15-32
  • Journal article (peer-reviewed)abstract
    • Effective system integration requires strict adherence to strong software engineering standards, a practice not much favoured in many collaborative research projects. We argue that component-based software engineering (CBSE) provides a way to overcome this problem because it provides flexibility for developers while requiring the adoption of only a modest number of software engineering practices. This focus on integration complements software re-use, the more usual motivation for adopting CBSE. We illustrate our argument by showing how a large-scale system architecture for an application in the domain of robot-enhanced therapy for children with autism spectrum disorder (ASD) has been implemented. We highlight the manner in which the integration process is facilitated by the architecture implementation of a set of placeholder components that comprise stubs for all functional primitives, as well as the complete implementation of all inter-component communications. We focus on the component-port-connector meta-model and show that the YARP robot platform is a well-matched middleware framework for the implementation of this model. To facilitate the validation of port-connector communication, we configure the initial placeholder implementation of the system architecture as a discrete event simulation and control the invocation of each component’s stub primitives probabilistically. This allows the system integrator to adjust the rate of inter-component communication while respecting its asynchronous and concurrent character. Also, individual ports and connectors can be periodically selected as the simulator cycles through each primitive in each sub-system component. This ability to control the rate of connector communication considerably eases the task of validating component-port-connector behaviour in a large system. 
Ultimately, over and above its well-accepted benefits for software re-use in robotics, CBSE strikes a good balance between software engineering best practice and the socio-technical problem of managing effective integration in collaborative robotics research projects. 
  •  
44.
  • Veto, Peter, et al. (author)
  • Incidental and non-incidental processing of biological motion : Orientation, attention and life detection
  • 2013
  • In: Cooperative Minds: Social Interaction and Group Dynamics. - : Cognitive Science Society, Inc.. - 9780976831891 ; , s. 1528-1533
  • Conference paper (peer-reviewed)abstract
    • Based on the unique traits of biological motion perception, the existence of a “life detector”, a special sensitivity to perceiving motion patterns typical for animals, seems to be plausible (Johnson, 2006). Showing motion displays upside-down or with changes in global structure is known to disturb processing in different ways, but not much is known yet about how inversion affects attention and incidental processing. To examine the perception of upright and inverted point-light walkers regarding incidental processing, we used a flanker paradigm (Eriksen & Eriksen, 1974) adapted for biological motion (Thornton & Vuong, 2004), and extended it to include inverted and scrambled figures. Results show that inverted walkers do not evoke incidental processing and they allow high accuracy in performance only when attentional capacities are not diminished. An asymmetrical interaction between upright and inverted figures is found which alludes to qualitatively different pathways of processing.
  •  
Type of publication
conference paper (28)
journal article (6)
book chapter (4)
doctoral thesis (3)
editorial collection (1)
reports (1)
research review (1)
Type of content
peer-reviewed (37)
other academic/artistic (6)
pop. science, debate, etc. (1)
Author/Editor
Hemeren, Paul (37)
Thill, Serge (10)
Nair, Vipul (9)
Veto, Peter (6)
Eriksson, Fredrik (5)
Högberg, Dan (5)
Hemeren, Paul E. (5)
Bhatt, Mehul, Profes ... (4)
Johannesson, Mikael (3)
Lebram, Mikael (3)
Lebram, Mikael, 1970 ... (3)
Suchan, Jakob (3)
Sciutti, Alessandra (3)
Sandini, Giulio (3)
Sun, Jiong (3)
Nicora, Elena (3)
Vignolo, Alessia (3)
Nilsson, Maria (2)
Zhou, Bo (2)
Nierstrasz, Vincent (2)
Billing, Erik, 1981- (2)
Duran, Boris (2)
Johannesson, Mikael, ... (2)
Billing, Erik (2)
Gawronska, Barbara (2)
Billing, Erik, PhD, ... (2)
Bredies, Katharina (2)
Lagerstedt, Erik (2)
Drejing, Karl, 1988- (2)
Gärdenfors, Peter (1)
Seoane, Fernando, 19 ... (1)
Seoane, Fernando (1)
Rybarczyk, Yves (1)
Ziemke, Tom (1)
Alklind Taylor, Anna ... (1)
Malmgren, Helge, 194 ... (1)
Svensson, Henrik (1)
Habibovic, Azra (1)
Klingegård, Maria (1)
Riveiro, Maria, 1978 ... (1)
Riveiro, Maria, Prof ... (1)
Bach, J. (1)
Vernon, David, 1958- (1)
Haglund, Björn, 1944 (1)
Lund, Anja (1)
Lund, Anja, 1971 (1)
Goeertzel, B. (1)
Ilke, M. (1)
Kasviki, Sofia (1)
Cai, Li (1)
University
University of Skövde (42)
Örebro University (3)
Lund University (2)
University of Borås (2)
RISE (2)
University of Gothenburg (1)
Linköping University (1)
Jönköping University (1)
Language
English (40)
Swedish (4)
Research subject (UKÄ/SCB)
Natural sciences (23)
Social Sciences (19)
Engineering and Technology (13)
Humanities (5)
