SwePub
Search the SwePub database


Hit list for the search "L773:2296 9144"

Search: L773:2296 9144

  • Results 1-50 of 60
1.
  • Arriola-Rios, Veronica E., et al. (author)
  • Modeling of Deformable Objects for Robotic Manipulation: A Tutorial and Review
  • 2020
  • In: Frontiers in Robotics and AI. - Frontiers Media S.A. - 2296-9144. ; 7
  • Research review (peer-reviewed), abstract:
    • Manipulation of deformable objects has given rise to an important set of open problems in the field of robotics. Application areas include robotic surgery, household robotics, manufacturing, logistics, and agriculture, to name a few. Related research problems span modeling and estimation of an object's shape, estimation of an object's material properties, such as elasticity and plasticity, object tracking and state estimation during manipulation, and manipulation planning and control. In this survey article, we start by providing a tutorial on foundational aspects of models of shape and shape dynamics. We then use this as the basis for a review of existing work on learning and estimation of these models and on motion planning and control to achieve desired deformations. We also discuss potential future lines of work.
2.
  • Balkenius, Christian, et al. (author)
  • From focused thought to reveries: A memory system for a conscious robot
  • 2018
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 5:APR
  • Journal article (peer-reviewed), abstract:
    • We introduce a memory model for robots that can account for many aspects of an inner world, ranging from object permanence, episodic memory, and planning to imagination and reveries. It is modeled after neurophysiological data and includes parts of the cerebral cortex together with models of arousal systems that are relevant for consciousness. The three central components are an identification network, a localization network, and a working memory network. Attention serves as the interface between the inner and the external world. It directs the flow of information from sensory organs to memory, as well as controlling top-down influences on perception. It also compares external sensations to internal top-down expectations. The model is tested in a number of computer simulations that illustrate how it can operate as a component in various cognitive tasks including perception, the A-not-B test, delayed matching to sample, episodic recall, and vicarious trial and error.
3.
  • Bartlett, Madeleine, et al. (author)
  • What Can You See? Identifying Cues on Internal States From the Movements of Natural Social Interactions
  • 2019
  • In: Frontiers in Robotics and AI. - Frontiers Research Foundation. - 2296-9144. ; 6:49
  • Journal article (peer-reviewed), abstract:
    • In recent years, the field of Human-Robot Interaction (HRI) has seen an increasing demand for technologies that can recognize and adapt to human behaviors and internal states (e.g., emotions and intentions). Psychological research suggests that human movements are important for inferring internal states. There is, however, a need to better understand what kind of information can be extracted from movement data, particularly in unconstrained, natural interactions. The present study examines which internal states and social constructs humans identify from movement in naturalistic social interactions. Participants either viewed clips of the full scene or processed versions of it displaying 2D positional data. Then, they were asked to fill out questionnaires assessing their social perception of the viewed material. We analyzed whether the full scene clips were more informative than the 2D positional data clips. First, we calculated the inter-rater agreement between participants in both conditions. Then, we employed machine learning classifiers to predict the internal states of the individuals in the videos based on the ratings obtained. Although we found a higher inter-rater agreement for full scenes compared to positional data, the level of agreement in the latter case was still above chance, thus demonstrating that the internal states and social constructs under study were identifiable in both conditions. A factor analysis run on participants' responses showed that participants identified the constructs interaction imbalance, interaction valence and engagement regardless of video condition. The machine learning classifiers achieved a similar performance in both conditions, again supporting the idea that movement alone carries relevant information. Overall, our results suggest it is reasonable to expect a machine learning algorithm, and consequently a robot, to successfully decode and classify a range of internal states and social constructs using low-dimensional data (such as the movements and poses of observed individuals) as input.
4.
  • Billing, Erik, 1981-, et al. (author)
  • Finding Your Way from the Bed to the Kitchen: Reenacting and Recombining Sensorimotor Episodes Learned from Human Demonstration
  • 2016
  • In: Frontiers in Robotics and AI. - Lausanne, Switzerland: Frontiers Media SA. - 2296-9144. ; 3
  • Journal article (peer-reviewed), abstract:
    • Several simulation theories have been proposed as an explanation for how humans and other agents internalize an "inner world" that allows them to simulate interactions with the external real world - prospectively and retrospectively. Such internal simulation of interaction with the environment has been argued to be a key mechanism behind mentalizing and planning. In the present work, we study internal simulations in a robot acting in a simulated human environment. A model of sensory-motor interactions with the environment is generated from human demonstrations and tested on a Robosoft Kompai robot. The model is used as a controller for the robot, reproducing the demonstrated behavior. Information from several different demonstrations is mixed, allowing the robot to produce novel paths through the environment, toward a goal specified by top-down contextual information. The robot model is also used in a covert mode, where the execution of actions is inhibited and perceptions are generated by a forward model. As a result, the robot generates an internal simulation of the sensory-motor interactions with the environment. Similar to the overt mode, the model is able to reproduce the demonstrated behavior as internal simulations. When experiences from several demonstrations are combined with a top-down goal signal, the system produces internal simulations of novel paths through the environment. These results can be understood as the robot imagining an "inner world" generated from previous experience, allowing it to try out different possible futures without executing actions overtly. We found that the success rate in terms of reaching the specified goal was higher during internal simulation, compared to overt action. These results are linked to a reduction in prediction errors generated during covert action. Despite the fact that the model is quite successful in terms of generating covert behavior toward specified goals, internal simulations display different temporal distributions compared to their overt counterparts. Links to human cognition and specifically mental imagery are discussed.
5.
  • Bimbo, Joao, et al. (author)
  • Exploiting Robot Hand Compliance and Environmental Constraints for Edge Grasps
  • 2019
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 6
  • Journal article (peer-reviewed), abstract:
    • This paper presents a method to grasp objects that cannot be picked directly from a table, using a soft, underactuated hand. These grasps are achieved by dragging the object to the edge of a table, and grasping it from the protruding part, performing so-called slide-to-edge grasps. This type of approach, which uses the environment to facilitate the grasp, is named Environmental Constraint Exploitation (ECE), and has been shown to improve the robustness of grasps while reducing the planning effort. The paper proposes two strategies, namely Continuous Slide and Grasp and Pivot and Re-Grasp, that are designed to deal with different objects. In the first strategy, the hand is positioned over the object and assumed to stick to it during the sliding until the edge, where the fingers wrap around the object and pick it up. In the second strategy, instead, the sliding motion is performed using pivoting, and thus the object is allowed to rotate with respect to the hand that drags it toward the edge. Then, as soon as the object reaches the desired position, the hand detaches from the object and moves to grasp the object from the side. In both strategies, the hand positioning for grasping the object is implemented using a recently proposed functional model for soft hands, the closure signature, whereas the sliding motion on the table is executed by using a hybrid force-velocity controller. We conducted 320 grasping trials with 16 different objects using a soft hand attached to a collaborative robot arm. Experiments showed that the Continuous Slide and Grasp is more suitable for small objects (e.g., a credit card), whereas the Pivot and Re-Grasp performs better with larger objects (e.g., a big book). The gathered data were used to train a classifier that selects the most suitable strategy to use, according to the object size and weight. Implementing ECE strategies with soft hands is a first step toward their use in real-world scenarios, where the environment should be seen more as a help than as a hindrance.
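The classifier mentioned at the end of the abstract, which picks a strategy from object size and weight, can be pictured with a few lines of scikit-learn. The sketch below is purely illustrative and is not the authors' implementation: the feature values, labels, and tree depth are all invented.

```python
# Illustrative only: select a grasp strategy from object size and weight,
# in the spirit of the classifier described above. All data is hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Features: [largest object dimension in cm, weight in g] (invented values).
X = [[8, 5], [10, 15], [12, 40], [20, 300], [25, 600], [30, 900]]
# Labels: 0 = Continuous Slide and Grasp (small objects),
#         1 = Pivot and Re-Grasp (large objects).
y = [0, 0, 0, 1, 1, 1]

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(clf.predict([[9, 10]]))    # card-like object     -> [0]
print(clf.predict([[28, 800]]))  # big-book-like object -> [1]
```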
6.
  • Brandão, Martim, et al. (author)
  • Editorial: Responsible Robotics
  • 2022
  • In: Frontiers in Robotics and AI. - Frontiers Media S.A. - 2296-9144. ; 9
  • Journal article (peer-reviewed)
7.
  • Bütepage, Judith, et al. (author)
  • Imitating by Generating: Deep Generative Models for Imitation of Interactive Tasks
  • 2020
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 7
  • Journal article (peer-reviewed), abstract:
    • Coordinating actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood mostly by imitation learning and active engagement with a skilled partner. These skills require the ability to predict one's partner and adapt to them during an interaction. In this work, we explore these ideas in a human-robot interaction setting in which a robot is required to learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of a number of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. As long-term motion prediction methods often suffer from the problem of regression to the mean, our technical contribution here is a novel probabilistic latent variable model which does not predict in joint space but in latent space. To test the proposed method, we collect human-human interaction data and human-robot interaction data for four interactive tasks: “hand-shake,” “hand-wave,” “parachute fist-bump,” and “rocket fist-bump.” We demonstrate experimentally the importance of predictive and adaptive components as well as low-level abstractions to successfully learn to imitate human behavior in interactive social tasks.
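To make the "predict in latent space, not joint space" idea concrete, here is a minimal sketch on invented data: a PCA encoder, a linear one-step predictor fitted in the latent space, and a decode back to joint space. The paper's model is probabilistic and deep; this toy only shows the structure of the idea.

```python
# Toy illustration of predicting in latent space rather than joint space.
# Not the paper's model: PCA and a linear one-step predictor stand in for it.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
T, d = 200, 14                          # time steps, joint-space dimension
joints = np.cumsum(rng.normal(size=(T, d)) * 0.01, axis=0)  # synthetic motion

pca = PCA(n_components=3).fit(joints)
z = pca.transform(joints)               # encode joint trajectories to latents

# Fit a linear predictor z[t+1] ~ z[t] @ A by least squares.
A, *_ = np.linalg.lstsq(z[:-1], z[1:], rcond=None)

z_next = z[-1] @ A                      # predict the next latent state...
joints_next = pca.inverse_transform(z_next.reshape(1, -1))  # ...decode back
print(joints_next.shape)                # (1, 14)
```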
8.
  • Buyukgoz, Sera, et al. (author)
  • Two ways to make your robot proactive: Reasoning about human intentions or reasoning about possible futures
  • 2022
  • In: Frontiers in Robotics and AI. - Frontiers Media S.A. - 2296-9144. ; 9
  • Journal article (peer-reviewed), abstract:
    • Robots sharing their space with humans need to be proactive to be helpful. Proactive robots can act on their own initiative in an anticipatory way to benefit humans. In this work, we investigate two ways to make robots proactive. One way is to recognize human intentions and to act to fulfill them, like opening the door that you are about to walk through. The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them, like recommending that you take an umbrella because rain is forecast. In this article, we present approaches to realize these two types of proactive behavior. We then present an integrated system that can generate proactive robot behavior by reasoning on both factors: intentions and predictions. We illustrate our system on a sample use case including a domestic robot and a human. We first run this use case with the two separate proactive systems, intention-based and prediction-based, and then run it with our integrated system. The results show that the integrated system is able to consider a broader variety of aspects that are required for proactivity.
9.
  • Calvo Barajas, Natalia, 1988-, et al. (author)
  • Hurry Up, We Need to Find the Key! How Regulatory Focus Design Affects Children's Trust in a Social Robot
  • 2021
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 8
  • Journal article (peer-reviewed), abstract:
    • In educational scenarios involving social robots, understanding the way robot behaviors affect children's motivation to achieve their learning goals is of vital importance. It is crucial for the formation of a trust relationship between the child and the robot so that the robot can effectively fulfill its role as a learning companion. In this study, we investigate the effect of a regulatory focus design scenario on the way children interact with a social robot. Regulatory focus theory is a type of self-regulation that involves specific strategies in pursuit of goals. It provides insights into how a person achieves a particular goal, either through a strategy focused on "promotion" that aims to achieve positive outcomes or through one focused on "prevention" that aims to avoid negative outcomes. In a user study, 69 children (7-9 years old) played a regulatory focus design goal-oriented collaborative game with the EMYS robot. We assessed children's perception of likability and competence and their trust in the robot, as well as their willingness to follow the robot's suggestions when pursuing a goal. Results showed that children perceived the prevention-focused robot as being more likable than the promotion-focused robot. We observed that a regulatory focus design did not directly affect trust. However, the perception of likability and competence was positively correlated with children's trust but negatively correlated with children's acceptance of the robot's suggestions.
10.
  • Chellapurath, Mrudul, et al. (author)
  • Bioinspired robots can foster nature conservation
  • 2023
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 10
  • Journal article (peer-reviewed), abstract:
    • We live in a time of unprecedented scientific and human progress while being increasingly aware of its negative impacts on our planet's health. Aerial, terrestrial, and aquatic ecosystems have significantly declined, putting us on course for a sixth mass extinction event. Nonetheless, the advances made in science, engineering, and technology have given us the opportunity to reverse some of the damage to ecosystems and preserve them through conservation efforts around the world. However, current conservation efforts are primarily human-led, with assistance from conventional robotic systems, which limits their scope and effectiveness and can negatively impact the surroundings. In this perspective, we present the field of bioinspired robotics as a way to develop versatile agents for future conservation efforts that can operate in the natural environment while minimizing the disturbance/impact to its inhabitants and the environment's natural state. We provide an operational and environmental framework that should be considered while developing bioinspired robots for conservation. These considerations go beyond addressing the challenges of human-led conservation efforts and leverage the advancements in the fields of materials, intelligence, and energy harvesting to make bioinspired robots move and sense like animals. In doing so, they make bioinspired robots an attractive, non-invasive, sustainable, and effective conservation tool for exploration, data collection, intervention, and maintenance tasks. Finally, we discuss the development of bioinspired robots in the context of collaboration, practicality, and applicability that would ensure their further development and widespread use to protect and preserve our natural world.
11.
  • Cooney, Martin, 1980-, et al. (author)
  • PastVision+: Thermovisual Inference of Recent Medicine Intake by Detecting Heated Objects and Cooled Lips
  • 2017
  • In: Frontiers in Robotics and AI. - Lausanne: Frontiers Media S.A. - 2296-9144. ; 4
  • Journal article (peer-reviewed), abstract:
    • This article addresses the problem of how a robot can infer what a person has done recently, with a focus on checking oral medicine intake in dementia patients. We present PastVision+, an approach showing how thermovisual cues in objects and humans can be leveraged to infer recent unobserved human-object interactions. Our expectation is that this approach can provide enhanced speed and robustness compared to existing methods, because our approach can draw inferences from single images without needing to wait to observe ongoing actions and can deal with short-lasting occlusions; when combined, we expect a potential improvement in accuracy due to the extra information from knowing what a person has recently done. To evaluate our approach, we obtained some data in which an experimenter touched medicine packages and a glass of water to simulate intake of oral medicine, for a challenging scenario in which some touches were conducted in front of a warm background. Results were promising, with a detection accuracy of touched objects of 50% at the 15 s mark and 0% at the 60 s mark, and a detection accuracy of cooled lips of about 100 and 60% at the 15 s mark for cold and tepid water, respectively. Furthermore, we conducted a follow-up check for another challenging scenario in which some participants pretended to take medicine or otherwise touched a medicine package: accuracies of inferring object touches, mouth touches, and actions were 72.2, 80.3, and 58.3% initially, and 50.0, 81.7, and 50.0% at the 15 s mark, with a rate of 89.0% for person identification. The results suggested some areas in which further improvements would be possible, toward facilitating robot inference of human actions, in the context of medicine intake monitoring.
12.
  • Cooney, Martin, 1980- (author)
  • Robot Art, in the Eye of the Beholder? Personalized Metaphors Facilitate Communication of Emotions and Creativity
  • 2021
  • In: Frontiers in Robotics and AI. - Lausanne: Frontiers Media S.A. - 2296-9144. ; 8
  • Journal article (peer-reviewed), abstract:
    • Socially assistive robots are being designed to support people's well-being in contexts such as art therapy, where human therapists are scarce, by making art together with people in an appropriate way. A challenge is that various complex and idiosyncratic concepts relating to art, like emotions and creativity, are not yet well understood. Guided by the principles of speculative design, the current article describes the use of a collaborative prototyping approach involving artists and engineers to explore this design space, especially in regard to general and personalized art-making strategies. This led to identifying a goal: to generate representational or abstract art that connects emotionally with people's art and shows creativity. For this, an approach involving personalized "visual metaphors" was proposed, which balances the degree to which a robot's art is influenced by interacting persons. The results of a small user study via a survey provided further insight into people's perceptions: the general design was perceived as intended and found appealing, and personalization via representational symbols appeared to lead to easier and clearer communication of emotions than via abstract symbols. In closing, the article describes a simplified demo and discusses future challenges. The contribution of the current work thus lies in suggesting how a robot can seek to interact with people in an emotional and creative way through personalized art; the aim is thereby to stimulate ideation in this promising area and facilitate acceptance of such robots in everyday human environments.
13.
  • Coser, Omar, et al. (author)
  • AI-based methodologies for exoskeleton-assisted rehabilitation of the lower limb: a review
  • 2024
  • In: Frontiers in Robotics and AI. - Frontiers Media S.A. - 2296-9144. ; 11
  • Research review (peer-reviewed), abstract:
    • Over the past few years, there has been a noticeable surge in efforts to design novel tools and approaches that incorporate Artificial Intelligence (AI) into the rehabilitation of persons with lower-limb impairments, using robotic exoskeletons. The potential benefits include the ability to implement personalized rehabilitation therapies by leveraging AI for robot control and data analysis, facilitating personalized feedback and guidance. Despite this, literature reviews specifically focusing on AI applications in lower-limb rehabilitative robotics are still lacking. To address this gap, our work reviews 37 peer-reviewed papers. This review categorizes the selected papers based on robotic application scenarios or AI methodologies. Additionally, it uniquely contributes by providing a detailed summary of the input features, AI model performance, enrolled populations, exoskeletal systems used in the validation process, and specific tasks for each paper. The innovative aspect lies in offering a clear understanding of the suitability of different algorithms for specific tasks, with the intention of guiding future developments and supporting informed decision-making in the realm of lower-limb exoskeletons and AI applications.
14.
  • Cumbal, Ronald, et al. (author)
  • Stereotypical nationality representations in HRI: perspectives from international young adults
  • 2023
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 10
  • Journal article (peer-reviewed), abstract:
    • People often form immediate expectations about other people, or groups of people, based on visual appearance and characteristics of their voice and speech. These stereotypes, often inaccurate or overgeneralized, may translate to robots that carry human-like qualities. This study aims to explore whether nationality-based preconceptions regarding appearance and accents can be found in people's perception of a virtual and a physical social robot. In an online survey with 80 subjects evaluating different first-language-influenced accents of English and nationality-influenced human-like faces for a virtual robot, we find that accents, in particular, lead to preconceptions on perceived competence and likeability that correspond to previous findings in social science research. In a physical interaction study with 74 participants, we then studied whether the perception of competence and likeability is similar after interacting with a robot portraying one of four different nationality representations from the online survey. We find that preconceptions on national stereotypes that appeared in the online survey vanish or are overshadowed by factors related to general interaction quality. We do, however, find some effects of the robot's stereotypical alignment with the subject group, with Swedish subjects (the majority group in this study) rating the Swedish-accented robot as less competent than the international group did but, on the other hand, recalling more facts from the Swedish robot's presentation than the international group did. In an extension in which the physical robot was replaced by a virtual robot interacting in the same scenario online, we further found the same result, namely that preconceptions are of less importance after actual interactions, hence demonstrating that the differences in the ratings of the robot between the online survey and the interaction are not due to the interaction medium. We hence conclude that attitudes towards stereotypical national representations in HRI have a weak effect, at least for the user group included in this study (primarily educated young students in an international setting).
15.
  • Das, Shemonto, et al. (author)
  • Active learning strategies for robotic tactile texture recognition tasks
  • 2024
  • In: Frontiers in Robotics and AI. - Frontiers Media S.A. - 2296-9144. ; 11
  • Journal article (peer-reviewed), abstract:
    • Accurate texture classification empowers robots to improve their perception and comprehension of the environment, enabling informed decision-making and appropriate responses to diverse materials and surfaces. Still, there are challenges for texture classification regarding the vast amount of time series data generated by robots' sensors. For instance, robots are anticipated to leverage human feedback during interactions with the environment, particularly in cases of misclassification or uncertainty. With the diversity of objects and textures in daily activities, Active Learning (AL) can be employed to minimize the number of samples the robot needs to request from humans, streamlining the learning process. In the present work, we use AL to select the most informative samples for annotation, thus reducing the human labeling effort required to achieve high performance in classifying textures. We also use a sliding window strategy for extracting features from the sensor's time series used in our experiments. Our multi-class dataset (i.e., 12 textures) challenges traditional AL strategies, since standard techniques cannot control the number of instances per class selected to be labeled. Therefore, we propose a novel class-balancing instance selection algorithm that we integrate with standard AL strategies. Moreover, we evaluate the effect of sliding windows of two time intervals (3 and 6 s) on our AL strategies. Finally, we analyze in our experiments the performance of AL strategies, with and without the balancing algorithm, in terms of f1-score, and positive effects are observed in terms of performance when using our proposed data pipeline. Our results show that the training data can be reduced to 70% using an AL strategy regardless of the machine learning model, and still reach, and in many cases surpass, baseline performance. Finally, exploring the textures with a 6-s window achieves the best performance, and using Extra Trees produces an average f1-score of 90.21% on the texture classification dataset.
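The class-balancing instance selection step can be sketched as uncertainty sampling with a per-class quota. This is one plausible reading of the idea, not the authors' algorithm; the class posteriors below are synthetic.

```python
# Illustrative sketch of class-balanced active-learning selection: pick the
# most uncertain unlabeled samples while capping how many may come from any
# one (predicted) class. A plausible reading of the idea, not the paper's code.
import numpy as np

def balanced_selection(proba, batch_size, n_classes):
    """proba: (n_samples, n_classes) predicted class probabilities."""
    uncertainty = 1.0 - proba.max(axis=1)         # least-confidence score
    predicted = proba.argmax(axis=1)
    quota = int(np.ceil(batch_size / n_classes))  # per-class cap
    taken, counts = [], np.zeros(n_classes, dtype=int)
    for i in np.argsort(-uncertainty):            # most uncertain first
        c = predicted[i]
        if counts[c] < quota:
            taken.append(i)
            counts[c] += 1
        if len(taken) == batch_size:
            break
    return taken

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(12), size=500)          # fake posteriors, 12 textures
print(balanced_selection(p, batch_size=24, n_classes=12))
```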
16.
  • Deichler, Anna, et al. (author)
  • Learning to generate pointing gestures in situated embodied conversational agents
  • 2023
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 10
  • Journal article (peer-reviewed), abstract:
    • One of the main goals of robotics and intelligent agent research is to enable such agents to communicate with humans in physically situated settings. Human communication consists of both verbal and non-verbal modes. Recent studies in enabling communication for intelligent agents have focused on verbal modes, i.e., language and speech. However, in a situated setting the non-verbal mode is crucial for an agent to adopt flexible communication strategies. In this work, we focus on learning to generate non-verbal communicative expressions in situated embodied interactive agents. Specifically, we show that an agent can learn pointing gestures in a physically simulated environment through a combination of imitation and reinforcement learning that achieves high motion naturalness and high referential accuracy. We compared our proposed system against several baselines in both subjective and objective evaluations. The subjective evaluation is done in a virtual reality setting where an embodied referential game is played between the user and the agent in a shared 3D space, a setup that fully assesses the communicative capabilities of the generated gestures. The evaluations show that our model achieves a higher level of referential accuracy and motion naturalness compared to a state-of-the-art supervised learning motion synthesis model, showing the promise of our proposed system that combines imitation and reinforcement learning for generating communicative gestures. Additionally, our system is robust in a physically simulated environment and thus has the potential to be applied to robots.
17.
  • Dogan, Fethiye Irmak, et al. (author)
  • Leveraging Explainability for Understanding Object Descriptions in Ambiguous 3D Environments
  • 2023
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 9
  • Journal article (peer-reviewed), abstract:
    • For effective human-robot collaboration, it is crucial for robots to understand users' requests about the three-dimensional space they perceive and to ask reasonable follow-up questions when there are ambiguities. In comprehending the users' object descriptions in such requests, existing studies have focused on limited object categories that can be detected or localized with existing object detection and localization modules. Further, they have mostly focused on comprehending the object descriptions using flat RGB images, without considering the depth dimension. In the wild, on the other hand, it is impossible to limit the object categories that can be encountered during the interaction, and three-dimensional space perception that includes depth information is fundamental to successful task completion. To understand described objects and resolve ambiguities in the wild, for the first time, we suggest a method leveraging explainability. Our method focuses on the active areas of an RGB scene to find the described objects without the above constraints on object categories and natural language instructions. We further improve our method to identify the described objects considering the depth dimension. We evaluate our method on varied real-world images and observe that the regions suggested by our method can help resolve ambiguities. When we compare our method with a state-of-the-art baseline, we show that our method performs better in scenes with ambiguous objects which cannot be recognized by existing object detectors. We also show that using depth features significantly improves performance in scenes where depth data is critical to disambiguate the objects and across our evaluation dataset that contains objects that can be specified with and without the depth dimension.
18.
19.
  • Engwall, Olov, et al. (author)
  • Socio-cultural perception of robot backchannels
  • 2023
  • In: Frontiers in Robotics and AI. - Frontiers Media S.A. - 2296-9144. ; 10
  • Journal article (peer-reviewed), abstract:
    • Introduction: Backchannels, i.e., short interjections by an interlocutor to indicate attention, understanding or agreement regarding utterances by another conversation participant, are fundamental in human-human interaction. A lack of backchannels, or backchannels with unexpected timing or formulation, may influence the conversation negatively, as misinterpretations regarding attention, understanding or agreement may occur. However, several studies over the years have shown that there may be cultural differences in how backchannels are provided and perceived and that these differences may affect intercultural conversations. Culturally aware robots must hence be endowed with the capability to detect and adapt to the way these conversational markers are used across different cultures. Traditionally, culture has been defined in terms of nationality, but this is more and more considered to be a stereotypic simplification. We therefore investigate several socio-cultural factors, such as the participants' gender, age, first language, extroversion and familiarity with robots, that may be relevant for the perception of backchannels. Methods: We first cover existing research on cultural influence on backchannel formulation and perception in human-human interaction and on backchannel implementation in Human-Robot Interaction. We then present an experiment on second language spoken practice, in which we investigate how backchannels from the social robot Furhat influence interaction (investigated through speaking time ratios and ethnomethodology and multimodal conversation analysis) and the impression of the robot (measured by post-session ratings). The experiment, conducted in a triad word game setting, focuses on whether activity-adaptive robot backchannels may redistribute the participants' speaking time ratio, and/or whether the participants' assessment of the robot is influenced by the backchannel strategy. The goal is to explore how robot backchannels should be adapted to different language learners to encourage their participation while being perceived as socio-culturally appropriate. Results: We find that a strategy that displays more backchannels towards a less active speaker may substantially decrease the difference in speaking time between the two speakers, that different socio-cultural groups respond differently to the robot's backchannel strategy and that they also perceive the robot differently after the session. Discussion: We conclude that the robot may need different backchanneling strategies towards speakers from different socio-cultural groups in order to encourage them to speak and have a positive perception of the robot.
20.
  • Fabricius, Victor, 1989-, et al. (author)
  • Interactions Between Heavy Trucks and Vulnerable Road Users – A Systematic Review to Inform the Interactive Capabilities of Highly Automated Trucks
  • 2022
  • In: Frontiers in Robotics and AI. - Lausanne: Frontiers Media S.A. - 2296-9144. ; 9
  • Journal article (peer-reviewed), abstract:
    • This study investigates interactive behaviors and communication cues of heavy goods vehicles (HGVs) and vulnerable road users (VRUs), such as pedestrians and cyclists, as a means of informing the interactive capabilities of highly automated HGVs. Following a general framing of road traffic interaction, we conducted a systematic literature review of empirical HGV-VRU studies found through the databases Scopus, ScienceDirect and TRID. We extracted reports of interactive road user behaviors and communication cues from 19 eligible studies and categorized these into two groups: 1) the associated communication channel/mechanism (e.g., nonverbal behavior), and 2) the type of communication cue (implicit/explicit). We found the following interactive behaviors and communication cues: 1) vehicle-centric (e.g., HGV as a larger vehicle, adapting trajectory, position relative to the VRU, timing of acceleration to pass the VRU, displaying information via human-machine interface), 2) driver-centric (e.g., professional driver, present inside/outside the cabin, eye-gaze behavior), and 3) VRU-centric (e.g., racer cyclist, adapting trajectory, position relative to the HGV, proximity to other VRUs, eye-gaze behavior). These cues are predominantly based on road user trajectories and movements (i.e., kinesics/proxemics nonverbal behavior), forming implicit communication, which indicates that this is the primary mechanism for HGV-VRU interactions. However, there are also reports of more explicit cues, such as cyclists waving to say thanks, the use of turning indicators, or new types of external human-machine interfaces (eHMI). Compared to corresponding scenarios with light vehicles, HGV-VRU interaction patterns are to a high extent formed by the HGV's size, shape and weight. For example, this can cause VRUs to feel less safe, drivers to seek to avoid unnecessary decelerations and accelerations, or lead to strategic behaviors due to larger blind spots. Based on these findings, it is likely that road user trajectories and kinematic behaviors will form the basis for communication also in highly automated HGV-VRU interaction. However, it might also be beneficial to use additional eHMI to compensate for the loss of more social driver-centric cues or to signal other types of information. While controlled experiments can be used to gather such initial insights, deeper understanding of highly automated HGV-VRU interactions will also require naturalistic studies.
21.
  • Felsberg, Michael, 1974-, et al. (author)
  • Unbiased decoding of biologically motivated visual feature descriptors
  • 2015
  • In: Frontiers in Robotics and AI. - Lausanne, Switzerland: Frontiers Research Foundation. - 2296-9144. ; 2:20
  • Journal article (peer-reviewed), abstract:
    • Visual feature descriptors are essential elements in most computer and robot vision systems. They typically lead to an abstraction of the input data, images, or video, for further processing, such as clustering and machine learning. In clustering applications, the cluster center represents the prototypical descriptor of the cluster and estimates the corresponding signal value, such as color value or dominating flow orientation, by decoding the prototypical descriptor. Machine learning applications determine the relevance of the respective descriptors, and a visualization of the corresponding decoded information is very useful for the analysis of the learning algorithm. Thus, decoding of feature descriptors is a relevant problem, frequently addressed in recent work. Also, the human brain represents sensorimotor information at a suitable abstraction level through varying activation of neuron populations. In previous work, computational models have been derived that agree with findings of neurophysiological experiments on the representation of visual information by decoding the underlying signals. However, the represented variables have a bias toward centers or boundaries of the tuning curves. Despite the fact that feature descriptors in computer vision are motivated from neuroscience, the respective decoding methods have been derived largely independently. From first principles, we derive unbiased decoding schemes for biologically motivated feature descriptors with a minimum amount of redundancy and suitable invariance properties. These descriptors establish a non-parametric density estimation of the underlying stochastic process with a particular algebraic structure. Based on the resulting algebraic constraints, we show formally how the decoding problem is formulated as an unbiased maximum likelihood estimator, and we derive a recurrent inverse diffusion scheme to infer the dominating mode of the distribution. These methods are evaluated in experiments, where stationary points and bias from noisy image data are compared to existing methods.
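For readers unfamiliar with the kind of descriptor being decoded here, the toy below encodes a scalar into overlapping cos² channels (a common population-code representation) and decodes it with the standard three-channel complex-argument rule, which is exact for a clean encoding of a single interior value. This only illustrates the representation; the paper itself derives unbiased maximum-likelihood decoders for the general noisy case.

```python
# Sketch of a channel (population-code) representation: encode a scalar with
# overlapping cos^2 basis functions, then decode it back. The arg-based local
# decoder is exact for a noise-free single-value encoding away from the ends.
import numpy as np

centers = np.arange(0, 11)                      # channel centers 0..10

def encode(x):
    d = x - centers
    c = np.cos(np.pi * d / 3) ** 2
    c[np.abs(d) >= 1.5] = 0.0                   # cos^2 kernel with support 3
    return c

def decode(c):
    l = int(np.argmax(c))                       # strongest channel (interior)
    idx = np.array([l - 1, l, l + 1])
    w = np.exp(2j * np.pi * (idx - l) / 3)
    return l + 3 / (2 * np.pi) * np.angle(np.sum(c[idx] * w))

print(decode(encode(4.37)))                     # ~4.37
```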
22.
  • Fraune, Marlena R., et al. (author)
  • Lessons Learned About Designing and Conducting Studies From HRI Experts
  • 2022
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 8
  • Journal article (peer-reviewed), abstract:
    • The field of human-robot interaction (HRI) research is multidisciplinary and requires researchers to understand diverse fields including computer science, engineering, informatics, philosophy, psychology, and more. However, it is hard to be an expert in everything. To help HRI researchers develop methodological skills, especially in areas that are relatively new to them, we conducted a virtual workshop, Workshop Your Study Design (WYSD), at the 2021 International Conference on HRI. In this workshop, we grouped participants with mentors, who are experts in areas like real-world studies, empirical lab studies, questionnaire design, interviews, participatory design, and statistics. During and after the workshop, participants discussed their proposed study methods, obtained feedback, and improved their work accordingly. In this paper, we present 1) workshop attendees' feedback about the workshop and 2) lessons that the participants learned during their discussions with mentors. Participants' responses about the workshop were positive, and future scholars who wish to run such a workshop can consider implementing the participants' suggestions. The main contribution of this paper is the lessons learned section, which the workshop participants helped form based on what they discovered during the workshop. We organize the lessons learned into the themes of 1) improving study design for HRI, 2) how to work with participants, especially children, 3) making the most of the study and the robot's limitations, and 4) how to collaborate well across fields, as these were the areas of the papers submitted to the workshop. These themes include practical tips and guidelines to assist researchers in learning about fields of HRI research with which they have limited experience. We include specific examples, and researchers can adapt the tips and guidelines to their own areas to avoid some common mistakes and pitfalls in their research.
23.
  • Förster, Frank, et al. (author)
  • Working with troubles and failures in conversation between humans and robots: workshop report
  • 2023
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 10
  • Journal article (peer-reviewed), abstract:
    • This paper summarizes the structure and findings from the first Workshop on Troubles and Failures in Conversations between Humans and Robots. The workshop was organized to bring together a small, interdisciplinary group of researchers working on miscommunication from two complementary perspectives. One group of technology-oriented researchers was made up of roboticists, Human-Robot Interaction (HRI) researchers and dialogue system experts. The second group involved experts from conversation analysis, cognitive science, and linguistics. Uniting both groups of researchers is the belief that communication failures between humans and machines need to be taken seriously and that a systematic analysis of such failures may open fruitful avenues in research beyond current practices to improve such systems, including both speech-centric and multimodal interfaces. This workshop represents a starting point for this endeavour. The aim of the workshop was threefold: Firstly, to establish an interdisciplinary network of researchers that share a common interest in investigating communicative failures with a particular view towards robotic speech interfaces; secondly, to gain a partial overview of the “failure landscape” as experienced by roboticists and HRI researchers; and thirdly, to determine the potential for creating a robotic benchmark scenario for testing future speech interfaces with respect to the identified failures. The present article summarizes both the “failure landscape” surveyed during the workshop as well as the outcomes of the attempt to define a benchmark scenario.
24.
  • Güler, Püren, et al. (author)
  • Visual state estimation in unseen environments through domain adaptation and metric learning
  • 2022
  • In: Frontiers in Robotics and AI. - Frontiers Media S.A. - 2296-9144. ; 9
  • Journal article (peer-reviewed), abstract:
    • In robotics, deep learning models are used in many visual perception applications, including the tracking, detection and pose estimation of robotic manipulators. The state-of-the-art methods, however, are conditioned on the availability of annotated training data, which may in practice be costly or even impossible to collect. Domain augmentation is one popular method to improve generalization to out-of-domain data by extending the training data set with predefined sources of variation, unrelated to the primary task. While this typically results in better performance on the target domain, it is not always clear that the trained models are capable of accurately separating the signals relevant to solving the task (e.g., appearance of an object of interest) from those associated with differences between the domains (e.g., lighting conditions). In this work we propose to improve the generalization capabilities of models trained with domain augmentation by formulating a secondary structured metric-space learning objective. We concentrate on one particularly challenging domain transfer task, visual state estimation for an articulated underground mining machine, and demonstrate the benefits of imposing structure on the encoding space. Our results indicate that the proposed method has the potential to transfer feature embeddings learned on the source domain, through a suitably designed augmentation procedure, onto an unseen target domain.
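The general recipe, a task loss plus a metric-learning term that pulls embeddings of augmented views of the same state together, can be written compactly. The numpy sketch below uses a standard triplet loss with hypothetical shapes and weights; it is not the paper's architecture or objective, just the shape of the idea.

```python
# Sketch of combining a task loss with a metric-learning term that ties
# augmented views of the same state together. Hypothetical shapes and weights.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # anchor/positive share the task-relevant state; negative does not.
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(0.0, d_pos - d_neg + margin))

rng = np.random.default_rng(0)
z_anchor = rng.normal(size=(32, 16))                  # embeddings, original views
z_pos = z_anchor + 0.05 * rng.normal(size=(32, 16))   # augmented, same state
z_neg = rng.normal(size=(32, 16))                     # different state

task_loss = 0.1                                       # placeholder task loss
total = task_loss + 0.5 * triplet_loss(z_anchor, z_pos, z_neg)
print(total)
```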
25.
  • Guzhva, Oleksiy, et al. (author)
  • Now you see me: Convolutional neural network based tracker for dairy cows
  • 2018
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 5:SEP
  • Journal article (peer-reviewed), abstract:
    • To maintain dairy cattle health and welfare at commensurable levels, analysis of the behaviors occurring between cows should be performed. This type of behavioral analysis is highly dependent on reliable and robust tracking of individuals for it to be viable and applicable on-site. In this article, we introduce a novel method for continuous tracking and data-marker based identification of individual cows based on convolutional neural networks (CNNs). The methodology for data acquisition and the overall implementation of tracking/identification are described. The Region of Interest (ROI) for the recordings was limited to a waiting area with free entrances to four automatic milking stations and a total size of 6 × 18 meters. During the time of the study, 252 Swedish Holstein cows had access to the waiting area at a conventional dairy barn with varying conditions and illumination. Three Axis M3006-V cameras placed in the ceiling at 3.6 meters height, providing a top-down view, were used for the recordings. The total amount of video data collected spanned 4 months and contained 500 million frames. To evaluate the system, two 1-h recordings were chosen. The exit time and gate-id found by the tracker for each cow were compared with the exit times produced by the gates. In total, 26 tracks were considered, and 23 were correctly tracked. Given those 26 starting points, the tracker was able to maintain the correct position for a total of 101.29 min, or 225 s on average per starting point/individual cow. Experiments indicate that a cow could be tracked close to 4 min before failure cases emerge and that cows could be successfully tracked for over 20 min in mildly crowded (<10 cows) scenes. The proposed system is a crucial stepping stone toward a fully automated tool for continuous monitoring of cows and their interactions with other individuals and the farm-building environment.
26.
  • Hemeren, Paul, et al. (author)
  • Kinematic-based classification of social gestures and grasping by humans and machine learning techniques
  • 2021
  • In: Frontiers in Robotics and AI. - Frontiers Media S.A. - 2296-9144. ; 8:308, pp. 1-17
  • Journal article (peer-reviewed), abstract:
    • The affective motion of humans conveys messages that other humans perceive and understand without conventional linguistic processing. This ability to classify human movement into meaningful gestures or segments also plays a critical role in creating social interaction between humans and robots. In the research presented here, grasping and social gesture recognition by humans and four machine learning techniques (k-Nearest Neighbor, Locality-Sensitive Hashing Forest, Random Forest and Support Vector Machine) is assessed by using human classification data as a reference for evaluating the classification performance of machine learning techniques for thirty hand/arm gestures. The gestures are rated according to the extent of grasping motion in one task and the extent to which the same gestures are perceived as social in another task. The results indicate that humans clearly rate differently according to the two different tasks. The machine learning techniques provide a similar classification of the actions according to grasping kinematics and social quality. Furthermore, there is a strong association between gesture kinematics and judgments of grasping and the social quality of the hand/arm gestures. Our results support previous research on intention-from-movement understanding that demonstrates the reliance on kinematic information for perceiving the social aspects and intentions in different grasping actions as well as communicative point-light actions.
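A skeleton of this kind of classifier comparison is easy to reproduce with scikit-learn. The sketch below covers three of the four named techniques (the Locality-Sensitive Hashing Forest is omitted, as it is no longer shipped with scikit-learn) on synthetic stand-ins for kinematic features; it shows the evaluation pattern, not the paper's data or results.

```python
# Illustrative comparison of classifier families on synthetic "kinematic"
# features. Data and labels are invented stand-ins for the study's gestures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                  # e.g., velocity/curvature features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # e.g., grasping vs. social

for name, clf in [("kNN", KNeighborsClassifier()),
                  ("Random Forest", RandomForestClassifier()),
                  ("SVM", SVC())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```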
27.
  • Irfan, Bahar, et al. (author)
  • Recommendations for designing conversational companion robots with older adults through foundation models
  • 2024
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 11
  • Journal article (peer-reviewed), abstract:
    • Companion robots aim to mitigate loneliness and social isolation among older adults by providing social and emotional support in their everyday lives. However, older adults' expectations of conversational companionship might substantially differ from what current technologies can achieve, as well as from other age groups like young adults. Thus, it is crucial to involve older adults in the development of conversational companion robots to ensure that these devices align with their unique expectations and experiences. The recent advancement in foundation models, such as large language models, has taken a significant stride toward fulfilling those expectations, in contrast to the prior literature that relied on humans controlling robots (i.e., Wizard of Oz) or limited rule-based architectures that are not feasible to apply in the daily lives of older adults. Consequently, we conducted a participatory design (co-design) study with 28 older adults, demonstrating a companion robot using a large language model (LLM) together with design scenarios that represent situations from everyday life. The thematic analysis of the discussions around these scenarios shows that older adults expect a conversational companion robot to engage in conversation actively in isolation and passively in social settings, remember previous conversations and personalize, protect privacy and provide control over learned data, give information and daily reminders, foster social skills and connections, and express empathy and emotions. Based on these findings, this article provides actionable recommendations for designing conversational companion robots for older adults with foundation models, such as LLMs and vision-language models, which can also be applied to conversational robots in other domains.
28.
  • Johal, Wafa, et al. (author)
  • Envisioning social drones in education
  • 2022
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 9
  • Journal article (peer-reviewed), abstract:
    • Education is one of the major application fields in social Human-Robot Interaction. Several forms of social robots have been explored to engage and assist students in the classroom environment, from full-bodied humanoid robots to tabletop robot companions, but flying robots have been left unexplored in this context. In this paper, we present seven online remote workshops conducted with 20 participants to investigate the application area of Education in the Human-Drone Interaction domain; particularly focusing on what roles a social drone could fulfill in a classroom, how it would interact with students, teachers and its environment, what it could look like, and what would specifically differ from other types of social robots used in education. In the workshops we used online collaboration tools, supported by a sketch artist, to help envision a social drone in a classroom. The results revealed several design implications for the roles and capabilities of a social drone, in addition to promising research directions for the development and design in the novel area of drones in education.
29.
  • Karayiannidis, Yiannis, 1980-, et al. (author)
  • Robot control for task performance and enhanced safety under impact
  • 2015
  • In: Frontiers in Robotics and AI. - Frontiers Media S.A. - 2296-9144. ; 2:DEC
  • Journal article (peer-reviewed), abstract:
    • A control law combining motion performance quality and a low-stiffness reaction to unintended contacts is proposed in this work. It achieves prescribed performance evolution of the position error under disturbances up to a level related to model uncertainties, and responds compliantly and with low stiffness to significant disturbances arising from impact forces. The controller employs a velocity reference signal in a model-based control law utilizing a non-linear time-dependent term, which embeds prescribed performance specifications and vanishes in case of significant disturbances. Simulation results with a three degrees of freedom (DOF) robot illustrate the motion performance and self-regulation of the output stiffness achieved by this controller under an external force, and highlight its advantages with respect to constant and switched impedance schemes. Experiments with a KUKA LWR 4+ demonstrate its performance under impact with a human while following a desired trajectory.
30.
  • Kyvik Nordås, Hildegunn, 1954-, et al. (author)
  • Drivers of Automation and Consequences for Jobs in Engineering Services: An Agent-Based Modelling Approach
  • 2021
  • In: Frontiers in Robotics and AI. - Frontiers Media S.A. - 2296-9144. ; 8
  • Journal article (peer-reviewed), abstract:
    • New technology is of little use if it is not adopted, and surveys show that less than 10% of firms use Artificial Intelligence. This paper studies the uptake of AI-driven automation and its impact on employment, using a dynamic agent-based model (ABM). It simulates the adoption of automation software as well as job destruction and job creation in its wake. There are two types of agents: manufacturing firms and engineering services firms. The agents choose between two business models: consulting or automated software. From the engineering firms' point of view, the model exhibits static economies of scale in the software model and dynamic (learning by doing) economies of scale in the consultancy model. From the manufacturing firms' point of view, switching to the software model requires restructuring of production, and there are network effects in switching. The ABM matches engineering and manufacturing agents and derives the employment of engineers and the tasks they perform, i.e., consultancy, software development, software maintenance, or employment in manufacturing. We find that the uptake of software is gradual: slow in the first few years, and then accelerating. Software is fully adopted after about 18 years in the baseline run. Employment of engineers shifts from consultancy to software development and to new jobs in manufacturing. Spells of unemployment may occur if skilled job creation in manufacturing is slow. Finally, the model generates boom and bust cycles in the software sector.
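The gradual-then-accelerating uptake is the classic signature of adoption with network effects, which a few lines of agent-based simulation can reproduce. All parameters below are invented, and the paper's ABM is far richer (matching, learning by doing, job flows); this only illustrates the adoption dynamic.

```python
# Minimal agent-based sketch of technology uptake with network effects: each
# firm switches with a probability that grows with the share already switched.
import numpy as np

rng = np.random.default_rng(0)
n_firms, years = 1000, 25
adopted = np.zeros(n_firms, dtype=bool)

for year in range(years):
    share = adopted.mean()
    p_switch = 0.01 + 0.3 * share          # base rate plus network effect
    switchers = (~adopted) & (rng.random(n_firms) < p_switch)
    adopted |= switchers
    print(year, round(adopted.mean(), 3))  # slow start, then acceleration
```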
31.
  • Lager, Anders, et al. (author)
  • Task Roadmaps: Speeding up Task Replanning
  • 2022
  • In: Frontiers in Robotics and AI. - Frontiers Media SA. - 2296-9144. ; 9
  • Journal article (peer-reviewed), abstract:
    • Modern industrial robots are increasingly deployed in dynamic environments, where unpredictable events are expected to impact the robot's operation. Under these conditions, runtime task replanning is required to avoid failures and unnecessary stops, while keeping up productivity. Task replanning is a long-sighted complement to path replanning, which is mostly concerned with avoiding unexpected obstacles that can lead to potentially unsafe situations. This paper focuses on task replanning as a way to dynamically adjust the robot's behaviour to the continuously evolving environment in which it is deployed. Analogously to the probabilistic roadmaps used in path planning, we propose the concept of task roadmaps as a method to replan tasks by leveraging an offline generated search space. A graph-based model of the robot application is converted to a task scheduling problem to be solved by a proposed Branch and Bound (B&B) approach and two benchmark approaches: Mixed Integer Linear Programming (MILP) and the Planning Domain Definition Language (PDDL). The B&B approach is proposed to compute the task roadmap, which is then reused to replan for unforeseeable events. The optimality and efficiency of this replanning approach are demonstrated in a simulation-based experiment with a mobile manipulator in a kitting application. In this study, the proposed B&B task roadmap replanning approach is significantly faster than a MILP solver and a PDDL-based planner.
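To show the search style involved, here is a toy branch-and-bound over task orderings with sequence-dependent costs and an admissible lower bound used for pruning. This is not the paper's Task Roadmap method, only a generic B&B illustration with an invented cost matrix.

```python
# Toy branch-and-bound over task orderings. The lower bound (each remaining
# task's cheapest possible entry cost) never overestimates, so pruning is safe.
import numpy as np

rng = np.random.default_rng(0)
n = 6
cost = rng.uniform(1, 10, size=(n, n))     # cost[i][j]: doing task j after i

best = {"order": None, "cost": float("inf")}

def lower_bound(remaining):
    return sum(cost[:, j].min() for j in remaining)

def branch(order, so_far, remaining):
    if not remaining:
        if so_far < best["cost"]:
            best["order"], best["cost"] = order, so_far
        return
    if so_far + lower_bound(remaining) >= best["cost"]:
        return                             # prune this branch
    for j in sorted(remaining):
        branch(order + [j], so_far + cost[order[-1], j], remaining - {j})

for start in range(n):
    branch([start], 0.0, set(range(n)) - {start})
print(best["order"], round(best["cost"], 2))
```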
  •  
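For readers unfamiliar with branch and bound, the sketch below solves a tiny task-sequencing problem with precedence constraints and order-dependent travel costs, pruning any branch whose lower bound already exceeds the best complete plan. The tasks, durations, and travel model are made up for illustration; the paper's task roadmaps additionally cache the explored search space so later replanning can reuse it.

durations = {"pick_A": 4, "pick_B": 3, "place_A": 2, "place_B": 2}
precedence = {("pick_A", "place_A"), ("pick_B", "place_B")}  # (before, after)

def travel(prev, nxt):
    # Cheap to stay at the same station (suffix A/B), expensive to switch.
    if prev is None:
        return 0
    return 1 if prev.split("_")[1] == nxt.split("_")[1] else 3

best = {"cost": float("inf"), "plan": None}

def search(done, last, remaining, cost):
    if not remaining:
        if cost < best["cost"]:
            best.update(cost=cost, plan=list(done))
        return
    # Admissible lower bound: remaining tasks cost at least their durations
    # (travel is non-negative), so pruning on it never discards the optimum.
    if cost + sum(durations[t] for t in remaining) >= best["cost"]:
        return  # prune this branch
    for t in sorted(remaining):
        if all(a in done for (a, b) in precedence if b == t):
            search(done + [t], t, remaining - {t},
                   cost + travel(last, t) + durations[t])

search([], None, set(durations), 0)
print(best)  # e.g. {'cost': 16, 'plan': ['pick_A', 'place_A', 'pick_B', 'place_B']}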
32.
  • Lager, Anders, et al. (författare)
  • Task Roadmaps: Speeding Up Task Replanning : Corrigendum
  • 2022
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media S.A.. - 2296-9144. ; 9
  • Tidskriftsartikel (refereegranskat)abstract
    • In the original article, Listings 1 and 2 were not included during the typesetting process and were overlooked during production. The missing listings appear below. 
  •  
33.
  • Lii, Neal Y., et al. (författare)
  • Exodex Adam—A Reconfigurable Dexterous Haptic User Interface for the Whole Hand
  • 2022
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 8
  • Tidskriftsartikel (refereegranskat)abstract
    • Applications for dexterous robot teleoperation and immersive virtual reality are growing. Haptic user input devices need to allow the user to intuitively command and seamlessly “feel” the environment they work in, whether virtual or a remote site through an avatar. We introduce the DLR Exodex Adam, a reconfigurable, dexterous, whole-hand haptic input device. The device comprises multiple modular, three degrees of freedom (3-DOF) robotic fingers, whose placement on the device can be adjusted to optimize manipulability for different user hand sizes. Additionally, the device is mounted on a 7-DOF robot arm to increase the user’s workspace. Exodex Adam uses a front-facing interface, with robotic fingers coupled to two of the user’s fingertips, the thumb, and two points on the palm. Including the palm, as opposed to only the fingertips as is common in existing devices, enables accurate tracking of the whole hand without additional sensors such as a data glove or motion capture. By providing “whole-hand” interaction with omnidirectional force-feedback at the attachment points, we enable the user to experience the environment with the complete hand instead of only the fingertips, thus realizing deeper immersion. Interaction using Exodex Adam can range from palpation of objects and surfaces to manipulation using both power and precision grasps, all while receiving haptic feedback. This article details the concept and design of the Exodex Adam, as well as use cases where it is deployed with different command modalities. These include mixed-media interaction in a virtual environment, gesture-based telemanipulation, and robotic hand–arm teleoperation using adaptive model-mediated teleoperation. Finally, we share the insights gained during our development process and use case deployments.
  •  
34.
  • Mansouri, Masoumeh, 1985-, et al. (författare)
  • Combining Task and Motion Planning : Challenges and Guidelines
  • 2021
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media S.A.. - 2296-9144. ; 8
  • Tidskriftsartikel (refereegranskat)abstract
    • Combined Task and Motion Planning (TAMP) is an area where no one-size-fits-all solution can exist. Many aspects of the domain, as well as operational requirements, have an effect on how algorithms and representations are designed. Frequently, trade-offs have to be made to build a system that is effective. We propose five research questions that we believe need to be answered to solve real-world problems that involve combined TAMP. We show which decisions and trade-offs should be made with respect to these research questions, and illustrate these on examples of existing application domains. By doing so, this article aims to provide a guideline for designing combined TAMP solutions that are adequate and effective in the target scenario. (A schematic TAMP interleaving loop follows this entry.)
  •  
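As background for the trade-offs the article discusses, the sketch below shows one common way of combining the two planning levels: a symbolic task planner proposes a plan, a motion planner checks each action geometrically, and geometric failures are fed back as constraints. This is a generic pattern, not the article's prescribed design; every function here is a placeholder supplied by the caller.

def plan_tamp(state0, goal, task_planner, motion_planner, max_attempts=100):
    banned = set()  # symbolic actions found geometrically infeasible
    for _ in range(max_attempts):
        plan = task_planner(state0, goal, banned)   # symbolic plan or None
        if plan is None:
            return None                             # no symbolic plan left
        state, trajectories = state0, []
        for action in plan:
            traj = motion_planner(state, action)    # geometric feasibility check
            if traj is None:
                # Feed the geometric failure back to the symbolic level
                # and replan around the offending action.
                banned.add(action)
                break
            trajectories.append(traj)
            state = action.apply(state)             # placeholder state update
        else:
            return list(zip(plan, trajectories))    # fully refined plan
    return None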
35.
  • Mirnig, Alexander G., et al. (författare)
  • External communication of automated shuttles: Results, experiences, and lessons learned from three European long-term research projects
  • 2022
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 9
  • Tidskriftsartikel (refereegranskat)abstract
    • Automated shuttles are already seeing deployment in many places across the world and have the potential to transform public mobility to be safer and more accessible. During the current transition phase from fully manual vehicles toward higher degrees of automation and resulting mixed traffic, there is a heightened need for additional communication or external indicators to comprehend automated vehicle actions for other road users. In this work, we present and discuss the results from seven studies (three preparatory and four main studies) conducted in three European countries aimed at investigating and providing a variety of such external communication solutions to facilitate the exchange of information between automated shuttles and other motorized and non-motorized road users.
  •  
36.
  • Mishra, Chinmaya, et al. (författare)
  • Does a robot's gaze aversion affect human gaze aversion?
  • 2023
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 10
  • Tidskriftsartikel (refereegranskat)abstract
    • Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot's gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot's lack of gaze aversion.
  •  
37.
  • Mishra, Chinmaya, et al. (författare)
  • Real-time emotion generation in human-robot dialogue using large language models
  • 2023
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 10
  • Tidskriftsartikel (refereegranskat)abstract
    • Affective behaviors enable social robots to not only establish better connections with humans but also serve as a tool for the robots to express their internal states. It has been well established that emotions are important to signal understanding in Human-Robot Interaction (HRI). This work aims to harness the power of Large Language Models (LLM) and proposes an approach to control the affective behavior of robots. By interpreting emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot's turn in real-time, using the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) where the model-driven emotion generation was compared against conditions where the robot did not display any emotions and where it displayed incongruent emotions. The participants interacted with the robot by playing a card sorting game that was specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and that the participants were able to perceive the robot's emotions. A robot expressing congruent, model-driven facial expressions was perceived as significantly more human-like and emotionally appropriate, and elicited a more positive impression. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real-time. Additionally, our results could be used in devising novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service. (A minimal ERC-style prompting sketch follows this entry.)
  •  
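A minimal sketch of the ERC-style turn-emotion prediction described above, assuming a generic chat-completion function call_llm (the paper used GPT-3.5). The label set, prompt wording, and the robot call in the final comment are this sketch's assumptions, not the paper's.

EMOTIONS = ["joy", "sadness", "surprise", "anger", "fear", "neutral"]

def pick_emotion(dialogue_history, robot_reply, call_llm):
    """Classify the emotion of the robot's upcoming turn from the dialogue."""
    transcript = "\n".join(f"{who}: {text}" for who, text in dialogue_history)
    prompt = (
        "Given this human-robot conversation:\n"
        f"{transcript}\n"
        f'The robot is about to say: "{robot_reply}"\n'
        f"Which one of {EMOTIONS} best fits the robot's turn? "
        "Answer with a single label."
    )
    label = call_llm(prompt).strip().lower()
    return label if label in EMOTIONS else "neutral"  # fall back safely

# The returned label would then drive the facial expression, e.g.
# robot.set_expression(pick_emotion(history, reply, call_llm))  # hypothetical API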
38.
  • Moore, Roger K., et al. (författare)
  • Vocal interactivity in-and-between humans, animals and robots
  • 2016
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Research Foundation. - 2296-9144. ; 3
  • Forskningsöversikt (refereegranskat)abstract
    • Almost all animals exploit vocal signals for a range of ecologically-motivated purposes: detecting predators/prey and marking territory, expressing emotions, establishing social relations and sharing information. Whether it is a bird raising an alarm, a whale calling to potential partners, a dog responding to human commands, a parent reading a story with a child, or a business-person accessing stock prices using Siri, vocalisation provides a valuable communication channel through which behaviour may be coordinated and controlled, and information may be distributed and acquired. Indeed, the ubiquity of vocal interaction has led to research across an extremely diverse array of fields, from assessing animal welfare, to understanding the precursors of human language, to developing voice-based human-machine interaction. Opportunities for cross-fertilisation between these fields abound; for example, using artificial cognitive agents to investigate contemporary theories of language grounding, using machine learning to analyse different habitats or adding vocal expressivity to the next generation of language-enabled autonomous social agents. However, much of the research is conducted within well-defined disciplinary boundaries, and many fundamental issues remain. This paper attempts to redress the balance by presenting a comparative review of vocal interaction within-and-between humans, animals and artificial agents (such as robots), and it identifies a rich set of open research questions that may benefit from an inter-disciplinary analysis.
  •  
39.
  • Oertel, Catharine, et al. (författare)
  • Engagement in Human-Agent Interaction: An Overview
  • 2020
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 7
  • Forskningsöversikt (refereegranskat)abstract
    • Engagement is a concept of the utmost importance in human-computer interaction, not only for informing the design and implementation of interfaces, but also for enabling more sophisticated interfaces capable of adapting to users. While the notion of engagement is actively being studied in a diverse set of domains, the term has been used to refer to a number of related, but different concepts. In fact, it has been referred to across different disciplines under different names and with different connotations in mind. Therefore, it can be quite difficult to understand what engagement means and how one study relates to another. Engagement has been studied not only in human-human, but also in human-agent interactions, i.e., interactions with physical robots and embodied virtual agents. In this overview article we focus on different factors involved in engagement studies, distinguishing especially between those studies that address task and social engagement, involve children and adults, are conducted in a lab or aimed at long-term interaction. We also present models for detecting engagement and for generating multimodal behaviors to show engagement.
  •  
40.
  • Oertel, Catharine, et al. (författare)
  • Towards an Engagement-Aware Attentive Artificial Listener for Multi-Party Interactions
  • 2021
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 8
  • Tidskriftsartikel (refereegranskat)abstract
    • Listening to one another is essential to human-human interaction. In fact, we humans spend a substantial part of our day listening to other people, in private as well as in work settings. Attentive listening serves the function of gathering information for oneself, but at the same time, it also signals to the speaker that he/she is being heard. To deduce whether our interlocutor is listening to us, we rely on reading his/her nonverbal cues, very much like how we also use non-verbal cues to signal our attention. Such signaling becomes more complex when we move from dyadic to multi-party interactions. Understanding how humans use nonverbal cues in a multi-party listening context not only increases our understanding of human-human communication but also aids the development of successful human-robot interactions. This paper brings together previous analyses of listener behavior in human-human multi-party interaction and provides novel insights into gaze patterns between the listeners in particular. We investigate whether the gaze patterns and feedback behavior, as observed in human-human dialogue, are also beneficial for the perception of a robot in multi-party human-robot interaction. To answer this question, we implement an attentive listening system that generates multi-modal listening behavior based on our human-human analysis. We compare our system to a baseline system that does not differentiate between different listener types in its behavior generation, and evaluate it in terms of the participants' perception of the robot and their behavior, as well as the perception of third-party observers. (A toy behavior-selection sketch follows this entry.)
  •  
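A toy, rule-based rendering of the kind of listener-type-aware behavior generation described above. The real system derives its behaviors from human-human data; the roles, probabilities, and timing threshold below are invented for illustration.

import random

def listener_behaviour(current_speaker, my_role, silence_ms):
    """Return (gaze_target, backchannel) for the robot listener."""
    if my_role == "addressee":
        # Addressees mostly hold gaze on the speaker and give verbal feedback.
        gaze = current_speaker
        backchannel = "mm-hm" if silence_ms > 600 and random.random() < 0.5 else None
    else:
        # Side participants split gaze between speaker and addressee and
        # produce little verbal feedback of their own.
        gaze = current_speaker if random.random() < 0.7 else "addressee"
        backchannel = None
    return gaze, backchannel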
41.
  • Pareto, Lena, 1962-, et al. (författare)
  • Children's learning-by-teaching with a social robot versus a younger child : Comparing interactions and tutoring styles.
  • 2022
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media S.A.. - 2296-9144. ; 9
  • Tidskriftsartikel (refereegranskat)abstract
    • Human peer tutoring is known to be effective for learning, and social robots are currently being explored for robot-assisted peer tutoring. In peer tutoring, not only the tutee but also the tutor benefit from the activity. Exploiting the learning-by-teaching mechanism, robots as tutees can be a promising approach for tutor learning. This study compares robots and humans by examining children's learning-by-teaching with a social robot and younger children, respectively. The study comprised a small-scale field experiment in a Swedish primary school, following a within-subject design. Ten sixth-grade students (age 12-13) assigned as tutors conducted two 30 min peer tutoring sessions each, one with a robot tutee and one with a third-grade student (age 9-10) as the tutee. The tutoring task consisted of teaching the tutee to play a two-player educational game designed to promote conceptual understanding and mathematical thinking. The tutoring sessions were video recorded, and verbal actions were transcribed and extended with crucial game actions and user gestures, to explore differences in interaction patterns between the two conditions. An extension to the classical initiation-response-feedback framework for classroom interactions, the IRFCE tutoring framework, was modified and used as an analytic lens. Actors, tutoring actions, and teaching interactions were examined and coded as they unfolded in the respective child-robot and child-child interactions during the sessions. Significant differences between the robot tutee and child tutee conditions regarding action frequencies and characteristics were found, concerning tutee initiatives, tutee questions, tutor explanations, tutee involvement, and evaluation feedback. We have identified ample opportunities for the tutor to learn from teaching in both conditions, for different reasons. The child tutee condition provided opportunities to engage in explanations to the tutee, experience smooth collaboration, and gain motivation through social responsibility for the younger child. The robot tutee condition provided opportunities to answer challenging questions from the tutee, receive plenty of feedback, and communicate using mathematical language. Hence, both conditions provide good learning opportunities for a tutor, but in different ways.
  •  
42.
  • Persson, Andreas, 1980-, et al. (författare)
  • Learning Actions to Improve the Perceptual Anchoring of Objects
  • 2017
  • Ingår i: Frontiers in Robotics and AI. - Lausanne : Frontiers Media S.A.. - 2296-9144. ; 3:76
  • Tidskriftsartikel (refereegranskat)abstract
    • In this paper, we examine how to ground symbols referring to objects in perceptual data from a robot system by examining object entities and their changes over time. In particular, we approach the challenge by 1) tracking and maintaining object entities over time; and 2) utilizing an artificial neural network to learn the coupling between words referring to actions and movement patterns of tracked object entities. For this purpose, we propose a framework which relies on the notation of perceptual anchoring. We further present a practical extension of the notation such that our framework can track and maintain the history of detected object entities. Our approach is evaluated using everyday objects typically found in a home environment. Our object classification module can detect and classify several hundred object categories. We demonstrate how the framework creates and maintains, both in space and time, representations of objects such as 'spoon' and 'coffee mug'. These representations are later used for training different sequential learning algorithms in order to learn movement actions such as 'pour' and 'stir'. Finally, we exemplify how learned movement actions, combined with common-sense knowledge, can further be used to improve the anchoring process itself. (A minimal anchoring sketch follows this entry.)
  •  
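A minimal sketch of the two ingredients the abstract describes: anchors that maintain a tracked object's history over time, and movement features extracted from that history for a sequential action learner. The field names and feature choice are this sketch's assumptions, not the paper's data structures.

from dataclasses import dataclass, field

@dataclass
class Anchor:
    symbol: str                                   # e.g. "coffee-mug-1"
    category: str                                 # e.g. "coffee mug"
    history: list = field(default_factory=list)   # (t, x, y, z) observations

    def update(self, t, position):
        self.history.append((t, *position))

def movement_features(anchor):
    # Per-step displacement of the tracked object: the kind of movement
    # pattern a sequential learner would be trained on for words like
    # 'pour' or 'stir'.
    pts = anchor.history
    return [tuple(b[i] - a[i] for i in range(1, 4)) for a, b in zip(pts, pts[1:])]

mug = Anchor("coffee-mug-1", "coffee mug")
for t, pos in enumerate([(0.0, 0.0, 0.0), (0.0, 0.0, 0.1), (0.05, 0.0, 0.1)]):
    mug.update(t, pos)
print(movement_features(mug))  # feed to an action classifier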
43.
  • Perugia, Giulia, et al. (författare)
  • Does the Goal Matter? : Emotion Recognition Tasks Can Change the Social Value of Facial Mimicry Towards Artificial Agents
  • 2021
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 8
  • Tidskriftsartikel (refereegranskat)abstract
    • In this paper, we present a study aimed at understanding whether the embodiment and humanlikeness of an artificial agent can affect people's spontaneous and instructed mimicry of its facial expressions. The study followed a mixed experimental design and revolved around an emotion recognition task. Participants were randomly assigned to one level of humanlikeness (between-subject variable: humanlike, characterlike, or morph facial texture of the artificial agents) and observed the facial expressions displayed by three artificial agents differing in embodiment (within-subject variable: video-recorded robot, physical robot, and virtual agent) and a human (control). To study both spontaneous and instructed facial mimicry, we divided the experimental sessions into two phases. In the first phase, we asked participants to observe and recognize the emotions displayed by the agents. In the second phase, we asked them to look at the agents' facial expressions, replicate their dynamics as closely as possible, and then identify the observed emotions. In both cases, we assessed participants' facial expressions with an automated Action Unit (AU) intensity detector. Contrary to our hypotheses, our results disclose that the agent that was perceived as the least uncanny, and most anthropomorphic, likable, and co-present, was the one spontaneously mimicked the least. Moreover, they show that instructed facial mimicry negatively predicts spontaneous facial mimicry. Further exploratory analyses revealed that spontaneous facial mimicry appeared when participants were less certain of the emotion they recognized. Hence, we postulate that an emotion recognition goal can flip the social value of facial mimicry as it transforms a likable artificial agent into a distractor. Further work is needed to corroborate this hypothesis. Nevertheless, our findings shed light on the functioning of human-agent and human-robot mimicry in emotion recognition tasks and help us to unravel the relationship between facial mimicry, liking, and rapport.
  •  
44.
  • Perugia, Giulia, et al. (författare)
  • I Can See It in Your Eyes : Gaze as an Implicit Cue of Uncanniness and Task Performance in Repeated Interactions With Robots
  • 2021
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media S.A.. - 2296-9144. ; 8
  • Tidskriftsartikel (refereegranskat)abstract
    • Over the past years, extensive research has been dedicated to developing robust platforms and data-driven dialog models to support long-term human-robot interactions. However, little is known about how people's perception of robots and engagement with them develop over time and how these can be accurately assessed through implicit and continuous measurement techniques. In this paper, we explore this by involving participants in three interaction sessions with multiple days of zero exposure in between. Each session consists of a joint task with a robot as well as two short social chats with it before and after the task. We measure participants' gaze patterns with a wearable eye-tracker and gauge their perception of the robot and engagement with it and the joint task using questionnaires. Results disclose that aversion of gaze in a social chat is an indicator of a robot's uncanniness and that the more people gaze at the robot in a joint task, the worse they perform. In contrast with most HRI literature, our results show that gaze toward an object of shared attention, rather than gaze toward a robotic partner, is the most meaningful predictor of engagement in a joint task. Furthermore, the analyses of gaze patterns in repeated interactions disclose that people's mutual gaze in a social chat develops congruently with their perceptions of the robot over time. These are key findings for the HRI community as they entail that gaze behavior can be used as an implicit measure of people's perception of robots in a social chat and of their engagement and task performance in a joint task.
  •  
45.
  • Rietz, Finn, et al. (författare)
  • WoZ4U : An Open-Source Wizard-of-Oz Interface for Easy, Efficient and Robust HRI Experiments
  • 2021
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media S.A.. - 2296-9144. ; 8
  • Tidskriftsartikel (refereegranskat)abstract
    • Wizard-of-Oz experiments play a vital role in Human-Robot Interaction (HRI), as they allow for quick and simple hypothesis testing. Still, the research community lacks a general, publicly available tool for conducting such experiments, and researchers often develop and implement their own tools, customized for each individual experiment. Besides being inefficient in terms of programming effort, this also makes it harder for non-technical researchers to conduct Wizard-of-Oz experiments. In this paper, we present a general and easy-to-use tool for the Pepper robot, one of the most commonly used robots in this context. While we provide the concrete interface for Pepper robots only, the system architecture is independent of the type of robot and can be adapted for other robots. A configuration file, which saves experiment-specific parameters, enables a quick setup for reproducible and repeatable Wizard-of-Oz experiments. A central server provides a graphical interface via a browser while handling the mapping of user input to actions on the robot. In our interface, keyboard shortcuts may be assigned to phrases, gestures, and composite behaviors to simplify and speed up control of the robot. The interface is lightweight and independent of the operating system. Our initial tests confirm that the system is functional, flexible, and easy to use. The interface, including source code, is made publicly available, and we hope that it will be useful for researchers with any background who want to conduct HRI experiments. (A hypothetical keymap sketch follows this entry.)
  •  
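To make the configuration idea concrete, here is a hypothetical wizard-side keymap in the spirit of what the abstract describes: shortcuts bound to phrases, gestures, and composite behaviors. The actual WoZ4U configuration format and robot calls differ (see the project's repository), so everything below, including the robot API, is an assumption.

KEYMAP = {
    "g": {"type": "say", "text": "Hello, nice to meet you!"},
    "h": {"type": "gesture", "name": "wave"},
    "j": {"type": "composite", "steps": [("gesture", "nod"), ("say", "I see.")]},
}

def on_key(key, robot):
    """Map a wizard's keypress to an action on the robot."""
    action = KEYMAP.get(key)
    if action is None:
        return
    if action["type"] == "say":
        robot.say(action["text"])             # hypothetical robot API
    elif action["type"] == "gesture":
        robot.play_gesture(action["name"])    # hypothetical robot API
    else:
        for kind, arg in action["steps"]:     # run composite steps in order
            (robot.say if kind == "say" else robot.play_gesture)(arg)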
46.
  • Sciutti, Alessandra, et al. (författare)
  • Language Meddles with Infants' Processing of Observed Actions
  • 2016
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 3
  • Tidskriftsartikel (refereegranskat)abstract
    • When learning from actions, language can be a crucial source to specify the learning content. Understanding its interactions with action processing is therefore fundamental when attempting to model the development of human learning to replicate it in artificial agents. From early childhood, two different processes participate in shaping infants' understanding of the events occurring around them: Infants' motor system influences their action perception, driving their attention to the action goal; additionally, parental language influences the way children parse what they observe into relevant units. To date, however, it has barely been investigated whether these two cognitive processes, action understanding and language, are separate and independent or whether language might interfere with the former. To address this question, we evaluated whether a verbal narrative concurrent with action observation could avert 14-month-old infants' attention from an agent's action goal, which is otherwise naturally selected when the action is performed by an agent. The infants observed movies of an actor reaching and transporting balls into a box. In three between-subject conditions, the reaching movement was accompanied either with no audio (Base condition), a sine-wave sound (Sound condition), or a speech sample (Speech condition). The results show that the presence of a speech sample underlining the movement phase reduced significantly the number of predictive gaze shifts to the goal compared to the other conditions. Our findings thus indicate that any modeling of the interaction between language and action processing will have to consider a potential top-down effect of the former, as language can be a meddler in the predictive behavior typical of the observation of goal-oriented actions.
  •  
47.
  • Serholt, Sofia, 1986, et al. (författare)
  • Comparing a Robot Tutee to a Human Tutee in a Learning-By-Teaching Scenario with Children
  • 2022
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 9
  • Tidskriftsartikel (refereegranskat)abstract
    • Social robots are increasingly being studied in educational roles, including as tutees in learning-by-teaching applications. To explore the benefits and drawbacks of using robots in this way, it is important to study how robot tutees compare to traditional learning-by-teaching situations. In this paper, we report the results of a within-subjects field experiment that compared a robot tutee to a human tutee in a Swedish primary school. Sixth-grade students participated in the study as tutors in a collaborative mathematics game where they were responsible for teaching a robot tutee as well as a third-grade student in two separate sessions. Their teacher was present to provide support and guidance for both sessions. Participants’ perceptions of the interactions were then gathered through a set of quantitative instruments measuring their enjoyment and willingness to interact with the tutees again, communication and collaboration with the tutees, their understanding of the task, sense of autonomy as tutors, and perceived learning gains for tutor and tutee. The results showed that the two scenarios were comparable with respect to enjoyment and willingness to play again, as well as perceptions of learning gains. However, significant differences were found for communication and collaboration, which participants considered easier with a human tutee. They also felt significantly less autonomous in their roles as tutors with the robot tutee as measured by their stated need for their teacher’s help. Participants further appeared to perceive the activity as somewhat clearer and working better when playing with the human tutee. These findings suggest that children can enjoy engaging in peer tutoring with a robot tutee. However, the interactive capabilities of robots will need to improve quite substantially before they can potentially engage in autonomous and unsupervised interactions with children.
  •  
48.
  • Serholt, Sofia, 1986, et al. (författare)
  • Trouble and Repair in Child–Robot Interaction: A Study of Complex Interactions With a Robot Tutee in a Primary School Classroom
  • 2020
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 7:46
  • Tidskriftsartikel (refereegranskat)abstract
    • Today, robots are studied and expected to be used in a range of social roles within classrooms. Yet, due to a number of limitations in social robots, robot interactions should be expected to occasionally suffer from troublesome situations and breakdowns. In this paper, we explore this issue by studying how children handle interaction trouble with a robot tutee in a classroom setting. The findings have implications not only for the design of robots, but also for evaluating their benefit in, and for, educational contexts. In this study, we conducted video analysis of children's group interactions with a robot tutee in a classroom setting, in order to explore the nature of these troubles in the wild. Within each group, children took turns acting as the primary interaction partner for the robot within the context of a mathematics game. Specifically, we examined what types of situations constitute trouble in these child–robot interactions, the strategies that individual children employ to cope with this trouble, as well as the strategies employed by other actors witnessing the trouble. By means of Interaction Analysis, we studied the video recordings of nine group interaction sessions (n = 33 children) in primary school grades 2 and 4. We found that sources of trouble related to the robot's social norm violations, which could be either active or passive. In terms of strategies, the children either persisted in their attempts at interacting with the robot by adapting their behavior in different ways, distanced themselves from the robot, or sought the help of present adults (i.e., a researcher in a teacher role, or an experimenter) or their peers (i.e., the child's classmates in each group). In terms of the witnessing actors, they addressed the trouble by providing guidance directed at the child interacting with the robot, or by intervening in the interaction. These findings reveal the unspoken rules by which children orient toward social robots, the complexities of child–robot interaction in the wild, and provide insights on children's perspectives and expectations of social robots in classroom contexts.
  •  
49.
  • Strömbom, Daniel, et al. (författare)
  • Robot Collection and Transport of Objects : A Biomimetic Process
  • 2018
  • Ingår i: Frontiers in Robotics and AI. - : FRONTIERS MEDIA SA. - 2296-9144. ; 5
  • Tidskriftsartikel (refereegranskat)abstract
    • Animals as diverse as ants and humans are faced with the tasks of collecting, transporting or herding objects. Sheepdogs do this daily when they collect, herd, and maneuver flocks of sheep. Here, we adapt a shepherding algorithm inspired by sheepdogs to collect and transport objects using a robot. Our approach produces an effective robot collection process that autonomously adapts to changing environmental conditions and is robust to noise from various sources. We suggest that this biomimetic process could be implemented in suitable robots to perform collection and transport tasks such as, for example, cleaning up objects in the environment, keeping animals away from sensitive areas, or collecting and herding animals to a specific location. Furthermore, the feedback-controlled interactions between the robot and objects which we study can be used to interrogate and understand the local and global interactions of real animal groups, thus offering a novel methodology of value to researchers studying collective animal behavior. (A simplified sketch of the heuristic follows this entry.)
  •  
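The heart of such a sheepdog-inspired heuristic alternates between collecting stray objects and driving a cohesive group toward the goal. The sketch below is a simplified rendering of that switching rule; the cohesion radius and offsets are schematic placeholders, not the paper's tuned values.

import math

def gcm(objects):
    # Global centre of mass of the object positions [(x, y), ...].
    n = len(objects)
    return (sum(x for x, _ in objects) / n, sum(y for _, y in objects) / n)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def behind(point, reference, offset):
    # A position `offset` beyond `point`, on the far side from `reference`.
    d = dist(point, reference) or 1e-9
    return (point[0] + offset * (point[0] - reference[0]) / d,
            point[1] + offset * (point[1] - reference[1]) / d)

def robot_target(objects, goal, r_a=1.0):
    centre = gcm(objects)
    if all(dist(o, centre) < r_a * len(objects) ** (2 / 3) for o in objects):
        # Group is cohesive: drive it from behind, relative to the goal.
        return behind(centre, goal, offset=2 * r_a)
    # Group is scattered: collect the object furthest from the centre.
    stray = max(objects, key=lambda o: dist(o, centre))
    return behind(stray, centre, offset=2 * r_a)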
50.
  • Swaminathan, Chittaranjan Srinivas, 1991-, et al. (författare)
  • Benchmarking the utility of maps of dynamics for human-aware motion planning
  • 2022
  • Ingår i: Frontiers in Robotics and AI. - : Frontiers Media S.A.. - 2296-9144. ; 9
  • Tidskriftsartikel (refereegranskat)abstract
    • Robots operating with humans in highly dynamic environments need to not only react to moving persons and objects but also anticipate and adhere to patterns of motion of dynamic agents in their environment. Currently, robotic systems use information about dynamics locally, through tracking and predicting motion within their direct perceptual range. This limits robots to reactive response to observed motion and to short-term predictions in their immediate vicinity. In this paper, we explore how maps of dynamics (MoDs) that provide information about motion patterns outside of the direct perceptual range of the robot can be used in motion planning to improve the behaviour of a robot in a dynamic environment. We formulate cost functions for four MoD representations to be used in any optimizing motion planning framework. Further, to evaluate the performance gain through using MoDs in motion planning, we design objective metrics and introduce a simulation framework for rapid benchmarking. We find that planners that utilize MoDs waste less time waiting for pedestrians, compared to planners that use geometric information alone. In particular, planners utilizing both intensity (proportion of observations at a grid cell where a dynamic entity was detected) and direction information have better task execution efficiency. (A generic MoD edge-cost sketch follows this entry.)
  •  
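One way to read "cost functions for MoD representations" concretely: an optimizing planner scores each edge by its geometric length plus a penalty for moving against the locally observed flow, weighted by how often motion was observed there. The sketch below is a generic example of such a cost, not one of the paper's four formulations; the constants and cell layout are assumptions.

import math

def mod_edge_cost(p, q, mod_lookup, w_flow=2.0):
    """Cost of traversing an edge from p to q, where p and q are (x, y)."""
    length = math.hypot(q[0] - p[0], q[1] - p[1])
    cell = mod_lookup(p)          # -> (intensity in [0, 1], direction in rad), or None
    if cell is None:
        return length             # no dynamics observed: geometric cost only
    intensity, flow_dir = cell
    heading = math.atan2(q[1] - p[1], q[0] - p[0])
    # Penalise motion against the observed flow, scaled by how reliably
    # the flow was observed (intensity).
    misalignment = (1 - math.cos(heading - flow_dir)) / 2   # 0 aligned, 1 opposed
    return length * (1 + w_flow * intensity * misalignment)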