SwePub
Search the SwePub database


Result list for search "L773:2296 9144"

Search: L773:2296 9144

  • Results 1-25 of 60
1.
  • Arriola-Rios, Veronica E., et al. (author)
  • Modeling of Deformable Objects for Robotic Manipulation : A Tutorial and Review
  • 2020
  • In: Frontiers in Robotics and AI. - : Frontiers Media S.A. - 2296-9144. ; 7
  • Research review (peer-reviewed), abstract:
    • Manipulation of deformable objects has given rise to an important set of open problems in the field of robotics. Application areas include robotic surgery, household robotics, manufacturing, logistics, and agriculture, to name a few. Related research problems span modeling and estimation of an object's shape, estimation of an object's material properties, such as elasticity and plasticity, object tracking and state estimation during manipulation, and manipulation planning and control. In this survey article, we start by providing a tutorial on foundational aspects of models of shape and shape dynamics. We then use this as the basis for a review of existing work on learning and estimation of these models and on motion planning and control to achieve desired deformations. We also discuss potential future lines of work.
2.
  • Balkenius, Christian, et al. (author)
  • From focused thought to reveries : A memory system for a conscious robot
  • 2018
  • In: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 5:APR
  • Journal article (peer-reviewed), abstract:
    • We introduce a memory model for robots that can account for many aspects of an inner world, ranging from object permanence, episodic memory, and planning to imagination and reveries. It is modeled after neurophysiological data and includes parts of the cerebral cortex together with models of arousal systems that are relevant for consciousness. The three central components are an identification network, a localization network, and a working memory network. Attention serves as the interface between the inner and the external world. It directs the flow of information from sensory organs to memory, as well as controlling top-down influences on perception. It also compares external sensations to internal top-down expectations. The model is tested in a number of computer simulations that illustrate how it can operate as a component in various cognitive tasks including perception, the A-not-B test, delayed matching to sample, episodic recall, and vicarious trial and error.
3.
  • Bartlett, Madeleine, et al. (author)
  • What Can You See? : Identifying Cues on Internal States From the Movements of Natural Social Interactions
  • 2019
  • In: Frontiers in Robotics and AI. - : Frontiers Research Foundation. - 2296-9144. ; 6:49
  • Journal article (peer-reviewed), abstract:
    • In recent years, the field of Human-Robot Interaction (HRI) has seen an increasing demand for technologies that can recognize and adapt to human behaviors and internal states (e.g., emotions and intentions). Psychological research suggests that human movements are important for inferring internal states. There is, however, a need to better understand what kind of information can be extracted from movement data, particularly in unconstrained, natural interactions. The present study examines which internal states and social constructs humans identify from movement in naturalistic social interactions. Participants either viewed clips of the full scene or processed versions of it displaying 2D positional data. Then, they were asked to fill out questionnaires assessing their social perception of the viewed material. We analyzed whether the full scene clips were more informative than the 2D positional data clips. First, we calculated the inter-rater agreement between participants in both conditions. Then, we employed machine learning classifiers to predict the internal states of the individuals in the videos based on the ratings obtained. Although we found a higher inter-rater agreement for full scenes compared to positional data, the level of agreement in the latter case was still above chance, thus demonstrating that the internal states and social constructs under study were identifiable in both conditions. A factor analysis run on participants’ responses showed that participants identified the constructs interaction imbalance, interaction valence and engagement regardless of video condition. The machine learning classifiers achieved a similar performance in both conditions, again supporting the idea that movement alone carries relevant information. Overall, our results suggest it is reasonable to expect a machine learning algorithm, and consequently a robot, to successfully decode and classify a range of internal states and social constructs using low-dimensional data (such as the movements and poses of observed individuals) as input.
4.
  • Billing, Erik, 1981-, et al. (author)
  • Finding Your Way from the Bed to the Kitchen: Reenacting and Recombining Sensorimotor Episodes Learned from Human Demonstration
  • 2016
  • In: Frontiers in Robotics and AI. - Lausanne, Switzerland : Frontiers Media SA. - 2296-9144. ; 3
  • Journal article (peer-reviewed), abstract:
    • Several simulation theories have been proposed as an explanation for how humans and other agents internalize an "inner world" that allows them to simulate interactions with the external real world - prospectively and retrospectively. Such internal simulation of interaction with the environment has been argued to be a key mechanism behind mentalizing and planning. In the present work, we study internal simulations in a robot acting in a simulated human environment. A model of sensory-motor interactions with the environment is generated from human demonstrations and tested on a Robosoft Kompai robot. The model is used as a controller for the robot, reproducing the demonstrated behavior. Information from several different demonstrations is mixed, allowing the robot to produce novel paths through the environment, toward a goal specified by top-down contextual information. The robot model is also used in a covert mode, where the execution of actions is inhibited and perceptions are generated by a forward model. As a result, the robot generates an internal simulation of the sensory-motor interactions with the environment. Similar to the overt mode, the model is able to reproduce the demonstrated behavior as internal simulations. When experiences from several demonstrations are combined with a top-down goal signal, the system produces internal simulations of novel paths through the environment. These results can be understood as the robot imagining an "inner world" generated from previous experience, allowing it to try out different possible futures without executing actions overtly. We found that the success rate in terms of reaching the specified goal was higher during internal simulation, compared to overt action. These results are linked to a reduction in prediction errors generated during covert action. Despite the fact that the model is quite successful in terms of generating covert behavior toward specified goals, internal simulations display different temporal distributions compared to their overt counterparts. Links to human cognition and specifically mental imagery are discussed.
5.
  • Bimbo, Joao, et al. (author)
  • Exploiting Robot Hand Compliance and Environmental Constraints for Edge Grasps
  • 2019
  • In: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 6
  • Journal article (peer-reviewed), abstract:
    • This paper presents a method to grasp objects that cannot be picked directly from a table, using a soft, underactuated hand. These grasps are achieved by dragging the object to the edge of a table, and grasping it from the protruding part, performing so-called slide-to-edge grasps. This type of approach, which uses the environment to facilitate the grasp, is named Environmental Constraint Exploitation (ECE), and has been shown to improve the robustness of grasps while reducing the planning effort. The paper proposes two strategies, namely Continuous Slide and Grasp and Pivot and Re-Grasp, that are designed to deal with different objects. In the first strategy, the hand is positioned over the object and assumed to stick to it during the sliding until the edge, where the fingers wrap around the object and pick it up. In the second strategy, instead, the sliding motion is performed using pivoting, and thus the object is allowed to rotate with respect to the hand that drags it toward the edge. Then, as soon as the object reaches the desired position, the hand detaches from the object and moves to grasp the object from the side. In both strategies, the hand positioning for grasping the object is implemented using a recently proposed functional model for soft hands, the closure signature, whereas the sliding motion on the table is executed by using a hybrid force-velocity controller. We conducted 320 grasping trials with 16 different objects using a soft hand attached to a collaborative robot arm. Experiments showed that the Continuous Slide and Grasp is more suitable for small objects (e.g., a credit card), whereas the Pivot and Re-Grasp performs better with larger objects (e.g., a big book). The gathered data were used to train a classifier that selects the most suitable strategy to use, according to the object size and weight. Implementing ECE strategies with soft hands is a first step toward their use in real-world scenarios, where the environment should be seen more as a help than as a hindrance.
6.
  • Brandão, Martim, et al. (author)
  • Editorial : Responsible Robotics
  • 2022
  • In: Frontiers in Robotics and AI. - : Frontiers Media S.A. - 2296-9144. ; 9
  • Journal article (peer-reviewed)
7.
  • Bütepage, Judith, et al. (author)
  • Imitating by Generating : Deep Generative Models for Imitation of Interactive Tasks
  • 2020
  • In: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 7
  • Journal article (peer-reviewed), abstract:
    • To coordinate actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood mostly by imitation learning and active engagement with a skilled partner. They require the ability to predict and adapt to one's partner during an interaction. In this work we want to explore these ideas in a human-robot interaction setting in which a robot is required to learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of a number of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. As long-term motion prediction methods often suffer from the problem of regression to the mean, our technical contribution here is a novel probabilistic latent variable model which does not predict in joint space but in latent space. To test the proposed method, we collect human-human interaction data and human-robot interaction data of four interactive tasks “hand-shake,” “hand-wave,” “parachute fist-bump,” and “rocket fist-bump.” We demonstrate experimentally the importance of predictive and adaptive components as well as low-level abstractions to successfully learn to imitate human behavior in interactive social tasks.
8.
  • Buyukgoz, Sera, et al. (author)
  • Two ways to make your robot proactive : Reasoning about human intentions or reasoning about possible futures
  • 2022
  • In: Frontiers in Robotics and AI. - : Frontiers Media S.A. - 2296-9144. ; 9
  • Journal article (peer-reviewed), abstract:
    • Robots sharing their space with humans need to be proactive to be helpful. Proactive robots can act on their own initiatives in an anticipatory way to benefit humans. In this work, we investigate two ways to make robots proactive. One way is to recognize human intentions and to act to fulfill them, like opening the door that you are about to cross. The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them, like recommending you to take an umbrella since rain has been forecast. In this article, we present approaches to realize these two types of proactive behavior. We then present an integrated system that can generate proactive robot behavior by reasoning on both factors: intentions and predictions. We illustrate our system on a sample use case including a domestic robot and a human. We first run this use case with the two separate proactive systems, intention-based and prediction-based, and then run it with our integrated system. The results show that the integrated system is able to consider a broader variety of aspects that are required for proactivity.
9.
  • Calvo Barajas, Natalia, 1988-, et al. (author)
  • Hurry Up, We Need to Find the Key! How Regulatory Focus Design Affects Children's Trust in a Social Robot
  • 2021
  • In: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 8
  • Journal article (peer-reviewed), abstract:
    • In educational scenarios involving social robots, understanding the way robot behaviors affect children's motivation to achieve their learning goals is of vital importance. It is crucial for the formation of a trust relationship between the child and the robot so that the robot can effectively fulfill its role as a learning companion. In this study, we investigate the effect of a regulatory focus design scenario on the way children interact with a social robot. Regulatory focus theory is a type of self-regulation that involves specific strategies in pursuit of goals. It provides insights into how a person achieves a particular goal, either through a strategy focused on "promotion" that aims to achieve positive outcomes or through one focused on "prevention" that aims to avoid negative outcomes. In a user study, 69 children (7-9 years old) played a regulatory focus design goal-oriented collaborative game with the EMYS robot. We assessed children's perception of likability and competence and their trust in the robot, as well as their willingness to follow the robot's suggestions when pursuing a goal. Results showed that children perceived the prevention-focused robot as being more likable than the promotion-focused robot. We observed that a regulatory focus design did not directly affect trust. However, the perception of likability and competence was positively correlated with children's trust but negatively correlated with children's acceptance of the robot's suggestions.
10.
  • Chellapurath, Mrudul, et al. (author)
  • Bioinspired robots can foster nature conservation
  • 2023
  • In: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 10
  • Journal article (peer-reviewed), abstract:
    • We live in a time of unprecedented scientific and human progress while being increasingly aware of its negative impacts on our planet’s health. Aerial, terrestrial, and aquatic ecosystems have significantly declined putting us on course to a sixth mass extinction event. Nonetheless, the advances made in science, engineering, and technology have given us the opportunity to reverse some of our ecosystem damage and preserve them through conservation efforts around the world. However, current conservation efforts are primarily human led with assistance from conventional robotic systems which limit their scope and effectiveness, along with negatively impacting the surroundings. In this perspective, we present the field of bioinspired robotics to develop versatile agents for future conservation efforts that can operate in the natural environment while minimizing the disturbance/impact to its inhabitants and the environment’s natural state. We provide an operational and environmental framework that should be considered while developing bioinspired robots for conservation. These considerations go beyond addressing the challenges of human-led conservation efforts and leverage the advancements in the field of materials, intelligence, and energy harvesting, to make bioinspired robots move and sense like animals. In doing so, it makes bioinspired robots an attractive, non-invasive, sustainable, and effective conservation tool for exploration, data collection, intervention, and maintenance tasks. Finally, we discuss the development of bioinspired robots in the context of collaboration, practicality, and applicability that would ensure their further development and widespread use to protect and preserve our natural world.
11.
  • Cooney, Martin, 1980-, et al. (author)
  • PastVision+ : Thermovisual Inference of Recent Medicine Intake by Detecting Heated Objects and Cooled Lips
  • 2017
  • In: Frontiers in Robotics and AI. - Lausanne : Frontiers Media S.A. - 2296-9144. ; 4
  • Journal article (peer-reviewed), abstract:
    • This article addresses the problem of how a robot can infer what a person has done recently, with a focus on checking oral medicine intake in dementia patients. We present PastVision+, an approach showing how thermovisual cues in objects and humans can be leveraged to infer recent unobserved human-object interactions. Our expectation is that this approach can provide enhanced speed and robustness compared to existing methods, because our approach can draw inferences from single images without needing to wait to observe ongoing actions and can deal with short-lasting occlusions; when combined, we expect a potential improvement in accuracy due to the extra information from knowing what a person has recently done. To evaluate our approach, we obtained some data in which an experimenter touched medicine packages and a glass of water to simulate intake of oral medicine, for a challenging scenario in which some touches were conducted in front of a warm background. Results were promising, with a detection accuracy of touched objects of 50% at the 15 s mark and 0% at the 60 s mark, and a detection accuracy of cooled lips of about 100 and 60% at the 15 s mark for cold and tepid water, respectively. Furthermore, we conducted a follow-up check for another challenging scenario in which some participants pretended to take medicine or otherwise touched a medicine package: accuracies of inferring object touches, mouth touches, and actions were 72.2, 80.3, and 58.3% initially, and 50.0, 81.7, and 50.0% at the 15 s mark, with a rate of 89.0% for person identification. The results suggested some areas in which further improvements would be possible, toward facilitating robot inference of human actions, in the context of medicine intake monitoring.
12.
  • Cooney, Martin, 1980- (author)
  • Robot Art, in the Eye of the Beholder? : Personalized Metaphors Facilitate Communication of Emotions and Creativity
  • 2021
  • In: Frontiers in Robotics and AI. - Lausanne : Frontiers Media S.A. - 2296-9144. ; 8
  • Journal article (peer-reviewed), abstract:
    • Socially assistive robots are being designed to support people's well-being in contexts such as art therapy where human therapists are scarce, by making art together with people in an appropriate way. A challenge is that various complex and idiosyncratic concepts relating to art, like emotions and creativity, are not yet well understood. Guided by the principles of speculative design, the current article describes the use of a collaborative prototyping approach involving artists and engineers to explore this design space, especially in regard to general and personalized art-making strategies. This led to identifying a goal: to generate representational or abstract art that connects emotionally with people's art and shows creativity. For this, an approach involving personalized "visual metaphors" was proposed, which balances the degree to which a robot's art is influenced by interacting persons. The results of a small user study via a survey provided further insight into people's perceptions: the general design was perceived as intended and appealed; as well, personalization via representational symbols appeared to lead to easier and clearer communication of emotions than via abstract symbols. In closing, the article describes a simplified demo, and discusses future challenges. Thus, the contribution of the current work lies in suggesting how a robot can seek to interact with people in an emotional and creative way through personalized art; thereby, the aim is to stimulate ideation in this promising area and facilitate acceptance of such robots in everyday human environments. © 2021 Cooney. 
13.
  • Coser, Omar, et al. (author)
  • AI-based methodologies for exoskeleton-assisted rehabilitation of the lower limb : a review
  • 2024
  • In: Frontiers in Robotics and AI. - : Frontiers Media S.A. - 2296-9144. ; 11
  • Research review (peer-reviewed), abstract:
    • Over the past few years, there has been a noticeable surge in efforts to design novel tools and approaches that incorporate Artificial Intelligence (AI) into rehabilitation of persons with lower-limb impairments, using robotic exoskeletons. The potential benefits include the ability to implement personalized rehabilitation therapies by leveraging AI for robot control and data analysis, facilitating personalized feedback and guidance. Despite this, there is a current lack of literature review specifically focusing on AI applications in lower-limb rehabilitative robotics. To address this gap, our work aims at performing a review of 37 peer-reviewed papers. This review categorizes selected papers based on robotic application scenarios or AI methodologies. Additionally, it uniquely contributes by providing a detailed summary of input features, AI model performance, enrolled populations, exoskeletal systems used in the validation process, and specific tasks for each paper. The innovative aspect lies in offering a clear understanding of the suitability of different algorithms for specific tasks, intending to guide future developments and support informed decision-making in the realm of lower-limb exoskeleton and AI applications.
14.
  • Cumbal, Ronald, et al. (author)
  • Stereotypical nationality representations in HRI : perspectives from international young adults
  • 2023
  • In: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 10
  • Journal article (peer-reviewed), abstract:
    • People often form immediate expectations about other people, or groups of people, based on visual appearance and characteristics of their voice and speech. These stereotypes, often inaccurate or overgeneralized, may translate to robots that carry human-like qualities. This study aims to explore if nationality-based preconceptions regarding appearance and accents can be found in people's perception of a virtual and a physical social robot. In an online survey with 80 subjects evaluating different first-language-influenced accents of English and nationality-influenced human-like faces for a virtual robot, we find that accents, in particular, lead to preconceptions on perceived competence and likeability that correspond to previous findings in social science research. In a physical interaction study with 74 participants, we then studied if the perception of competence and likeability is similar after interacting with a robot portraying one of four different nationality representations from the online survey. We find that preconceptions on national stereotypes that appeared in the online survey vanish or are overshadowed by factors related to general interaction quality. We do, however, find some effects of the robot's stereotypical alignment with the subject group, with Swedish subjects (the majority group in this study) rating the Swedish-accented robot as less competent than the international group, but, on the other hand, recalling more facts from the Swedish robot's presentation than the international group does. In an extension in which the physical robot was replaced by a virtual robot interacting in the same scenario online, we further found the same results that preconceptions are of less importance after actual interactions, hence demonstrating that the differences in the ratings of the robot between the online survey and the interaction is not due to the interaction medium. We hence conclude that attitudes towards stereotypical national representations in HRI have a weak effect, at least for the user group included in this study (primarily educated young students in an international setting).
15.
  • Das, Shemonto, et al. (author)
  • Active learning strategies for robotic tactile texture recognition tasks
  • 2024
  • In: Frontiers in Robotics and AI. - : Frontiers Media S.A. - 2296-9144. ; 11
  • Journal article (peer-reviewed), abstract:
    • Accurate texture classification empowers robots to improve their perception and comprehension of the environment, enabling informed decision-making and appropriate responses to diverse materials and surfaces. Still, there are challenges for texture classification regarding the vast amount of time series data generated from robots’ sensors. For instance, robots are anticipated to leverage human feedback during interactions with the environment, particularly in cases of misclassification or uncertainty. With the diversity of objects and textures in daily activities, Active Learning (AL) can be employed to minimize the number of samples the robot needs to request from humans, streamlining the learning process. In the present work, we use AL to select the most informative samples for annotation, thus reducing the human labeling effort required to achieve high performance for classifying textures. We also use a sliding window strategy for extracting features from the sensor’s time series used in our experiments. Our multi-class dataset (e.g., 12 textures) challenges traditional AL strategies since standard techniques cannot control the number of instances per class selected to be labeled. Therefore, we propose a novel class-balancing instance selection algorithm that we integrate with standard AL strategies. Moreover, we evaluate the effect of sliding windows of two-time intervals (3 and 6 s) on our AL Strategies. Finally, we analyze in our experiments the performance of AL strategies, with and without the balancing algorithm, regarding f1-score, and positive effects are observed in terms of performance when using our proposed data pipeline. Our results show that the training data can be reduced to 70% using an AL strategy regardless of the machine learning model and reach, and in many cases, surpass a baseline performance. Finally, exploring the textures with a 6-s window achieves the best performance, and using either Extra Trees produces an average f1-score of 90.21% in the texture classification data set.
16.
  • Deichler, Anna, et al. (author)
  • Learning to generate pointing gestures in situated embodied conversational agents
  • 2023
  • In: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 10
  • Journal article (peer-reviewed), abstract:
    • One of the main goals of robotics and intelligent agent research is to enable them to communicate with humans in physically situated settings. Human communication consists of both verbal and non-verbal modes. Recent studies in enabling communication for intelligent agents have focused on verbal modes, i.e., language and speech. However, in a situated setting the non-verbal mode is crucial for an agent to adapt flexible communication strategies. In this work, we focus on learning to generate non-verbal communicative expressions in situated embodied interactive agents. Specifically, we show that an agent can learn pointing gestures in a physically simulated environment through a combination of imitation and reinforcement learning that achieves high motion naturalness and high referential accuracy. We compared our proposed system against several baselines in both subjective and objective evaluations. The subjective evaluation is done in a virtual reality setting where an embodied referential game is played between the user and the agent in a shared 3D space, a setup that fully assesses the communicative capabilities of the generated gestures. The evaluations show that our model achieves a higher level of referential accuracy and motion naturalness compared to a state-of-the-art supervised learning motion synthesis model, showing the promise of our proposed system that combines imitation and reinforcement learning for generating communicative gestures. Additionally, our system is robust in a physically-simulated environment thus has the potential of being applied to robots.
17.
  • Dogan, Fethiye Irmak, et al. (author)
  • Leveraging Explainability for Understanding Object Descriptions in Ambiguous 3D Environments
  • 2023
  • In: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 9
  • Journal article (peer-reviewed), abstract:
    • For effective human-robot collaboration, it is crucial for robots to understand requests from users perceiving the three-dimensional space and ask reasonable follow-up questions when there are ambiguities. While comprehending the users’ object descriptions in the requests, existing studies have focused on this challenge for limited object categories that can be detected or localized with existing object detection and localization modules. Further, they have mostly focused on comprehending the object descriptions using flat RGB images without considering the depth dimension. On the other hand, in the wild, it is impossible to limit the object categories that can be encountered during the interaction, and 3-dimensional space perception that includes depth information is fundamental in successful task completion. To understand described objects and resolve ambiguities in the wild, for the first time, we suggest a method leveraging explainability. Our method focuses on the active areas of an RGB scene to find the described objects without putting the previous constraints on object categories and natural language instructions. We further improve our method to identify the described objects considering depth dimension. We evaluate our method in varied real-world images and observe that the regions suggested by our method can help resolve ambiguities. When we compare our method with a state-of-the-art baseline, we show that our method performs better in scenes with ambiguous objects which cannot be recognized by existing object detectors. We also show that using depth features significantly improves performance in scenes where depth data is critical to disambiguate the objects and across our evaluation dataset that contains objects that can be specified with and without the depth dimension.
18.
19.
  • Engwall, Olov, et al. (author)
  • Socio-cultural perception of robot backchannels
  • 2023
  • In: Frontiers in Robotics and AI. - : Frontiers Media S.A. - 2296-9144. ; 10
  • Journal article (peer-reviewed), abstract:
    • Introduction: Backchannels, i.e., short interjections by an interlocutor to indicate attention, understanding or agreement regarding utterances by another conversation participant, are fundamental in human-human interaction. Lack of backchannels or if they have unexpected timing or formulation may influence the conversation negatively, as misinterpretations regarding attention, understanding or agreement may occur. However, several studies over the years have shown that there may be cultural differences in how backchannels are provided and perceived and that these differences may affect intercultural conversations. Culturally aware robots must hence be endowed with the capability to detect and adapt to the way these conversational markers are used across different cultures. Traditionally, culture has been defined in terms of nationality, but this is more and more considered to be a stereotypic simplification. We therefore investigate several socio-cultural factors, such as the participants’ gender, age, first language, extroversion and familiarity with robots, that may be relevant for the perception of backchannels. Methods: We first cover existing research on cultural influence on backchannel formulation and perception in human-human interaction and on backchannel implementation in Human-Robot Interaction. We then present an experiment on second language spoken practice, in which we investigate how backchannels from the social robot Furhat influence interaction (investigated through speaking time ratios and ethnomethodology and multimodal conversation analysis) and impression of the robot (measured by post-session ratings). The experiment, made in a triad word game setting, is focused on if activity-adaptive robot backchannels may redistribute the participants’ speaking time ratio, and/or if the participants’ assessment of the robot is influenced by the backchannel strategy. The goal is to explore how robot backchannels should be adapted to different language learners to encourage their participation while being perceived as socio-culturally appropriate. Results: We find that a strategy that displays more backchannels towards a less active speaker may substantially decrease the difference in speaking time between the two speakers, that different socio-cultural groups respond differently to the robot’s backchannel strategy and that they also perceive the robot differently after the session. Discussion: We conclude that the robot may need different backchanneling strategies towards speakers from different socio-cultural groups in order to encourage them to speak and have a positive perception of the robot.
20.
  • Fabricius, Victor, 1989-, et al. (author)
  • Interactions Between Heavy Trucks and Vulnerable Road Users – A Systematic Review to Inform the Interactive Capabilities of Highly Automated Trucks
  • 2022
  • In: Frontiers in Robotics and AI. - Lausanne : Frontiers Media S.A. - 2296-9144. ; 9
  • Journal article (peer-reviewed), abstract:
    • This study investigates interactive behaviors and communication cues of heavy goods vehicles (HGVs) and vulnerable road users (VRUs) such as pedestrians and cyclists as a means of informing the interactive capabilities of highly automated HGVs. Following a general framing of road traffic interaction, we conducted a systematic literature review of empirical HGV-VRU studies found through the databases Scopus, ScienceDirect and TRID. We extracted reports of interactive road user behaviors and communication cues from 19 eligible studies and categorized these into two groups: 1) the associated communication channel/mechanism (e.g., nonverbal behavior), and 2) the type of communication cue (implicit/explicit). We found the following interactive behaviors and communication cues: 1) vehicle-centric (e.g., HGV as a larger vehicle, adapting trajectory, position relative to the VRU, timing of acceleration to pass the VRU, displaying information via human-machine interface), 2) driver-centric (e.g., professional driver, present inside/outside the cabin, eye-gaze behavior), and 3) VRU-centric (e.g., racer cyclist, adapting trajectory, position relative to the HGV, proximity to other VRUs, eye-gaze behavior). These cues are predominantly based on road user trajectories and movements (i.e., kinesics/proxemics nonverbal behavior) forming implicit communication, which indicates that this is the primary mechanism for HGV-VRU interactions. However, there are also reports of more explicit cues such as cyclists waving to say thanks, the use of turning indicators, or new types of external human-machine interfaces (eHMI). Compared to corresponding scenarios with light vehicles, HGV-VRU interaction patterns are to a high extent formed by the HGV’s size, shape and weight. For example, this can cause VRUs to feel less safe, drivers to seek to avoid unnecessary decelerations and accelerations, or lead to strategic behaviors due to larger blind-spots. Based on these findings, it is likely that road user trajectories and kinematic behaviors will form the basis for communication also for highly automated HGV-VRU interaction. However, it might also be beneficial to use additional eHMI to compensate for the loss of more social driver-centric cues or to signal other types of information. While controlled experiments can be used to gather such initial insights, deeper understanding of highly automated HGV-VRU interactions will also require naturalistic studies. © 2022 Fabricius, Habibovic, Rizgary, Andersson and Wärnestål.
21.
  • Felsberg, Michael, 1974-, et al. (author)
  • Unbiased decoding of biologically motivated visual feature descriptors
  • 2015
  • In: Frontiers in Robotics and AI. - Lausanne, Switzerland : Frontiers Research Foundation. - 2296-9144. ; 2:20
  • Journal article (peer-reviewed), abstract:
    • Visual feature descriptors are essential elements in most computer and robot vision systems. They typically lead to an abstraction of the input data, images, or video, for further processing, such as clustering and machine learning. In clustering applications, the cluster center represents the prototypical descriptor of the cluster and estimates the corresponding signal value, such as color value or dominating flow orientation, by decoding the prototypical descriptor. Machine learning applications determine the relevance of respective descriptors and a visualization of the corresponding decoded information is very useful for the analysis of the learning algorithm. Thus decoding of feature descriptors is a relevant problem, frequently addressed in recent work. Also, the human brain represents sensorimotor information at a suitable abstraction level through varying activation of neuron populations. In previous work, computational models have been derived that agree with findings of neurophysiological experiments on the representation of visual information by decoding the underlying signals. However, the represented variables have a bias toward centers or boundaries of the tuning curves. Despite the fact that feature descriptors in computer vision are motivated from neuroscience, the respective decoding methods have been derived largely independent. From first principles, we derive unbiased decoding schemes for biologically motivated feature descriptors with a minimum amount of redundancy and suitable invariance properties. These descriptors establish a non-parametric density estimation of the underlying stochastic process with a particular algebraic structure. Based on the resulting algebraic constraints, we show formally how the decoding problem is formulated as an unbiased maximum likelihood estimator and we derive a recurrent inverse diffusion scheme to infer the dominating mode of the distribution. These methods are evaluated in experiments, where stationary points and bias from noisy image data are compared to existing methods.
22.
  • Fraune, Marlena R., et al. (author)
  • Lessons Learned About Designing and Conducting Studies From HRI Experts
  • 2022
  • In: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 8
  • Journal article (peer-reviewed), abstract:
    • The field of human-robot interaction (HRI) research is multidisciplinary and requires researchers to understand diverse fields including computer science, engineering, informatics, philosophy, psychology, and more disciplines. However, it is hard to be an expert in everything. To help HRI researchers develop methodological skills, especially in areas that are relatively new to them, we conducted a virtual workshop, Workshop Your Study Design (WYSD), at the 2021 International Conference on HRI. In this workshop, we grouped participants with mentors, who are experts in areas like real-world studies, empirical lab studies, questionnaire design, interview, participatory design, and statistics. During and after the workshop, participants discussed their proposed study methods, obtained feedback, and improved their work accordingly. In this paper, we present 1) Workshop attendees' feedback about the workshop and 2) Lessons that the participants learned during their discussions with mentors. Participants' responses about the workshop were positive, and future scholars who wish to run such a workshop can consider implementing their suggestions. The main contribution of this paper is the lessons learned section, where the workshop participants contributed to forming this section based on what participants discovered during the workshop. We organize lessons learned into themes of 1) Improving study design for HRI, 2) How to work with participants - especially children -, 3) Making the most of the study and robot's limitations, and 4) How to collaborate well across fields as they were the areas of the papers submitted to the workshop. These themes include practical tips and guidelines to assist researchers to learn about fields of HRI research with which they have limited experience. We include specific examples, and researchers can adapt the tips and guidelines to their own areas to avoid some common mistakes and pitfalls in their research.
23.
  • Förster, Frank, et al. (author)
  • Working with troubles and failures in conversation between humans and robots: workshop report
  • 2023
  • In: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 10
  • Journal article (peer-reviewed), abstract:
    • This paper summarizes the structure and findings from the first Workshop on Troubles and Failures in Conversations between Humans and Robots. The workshop was organized to bring together a small, interdisciplinary group of researchers working on miscommunication from two complementary perspectives. One group of technology-oriented researchers was made up of roboticists, Human-Robot Interaction (HRI) researchers and dialogue system experts. The second group involved experts from conversation analysis, cognitive science, and linguistics. Uniting both groups of researchers is the belief that communication failures between humans and machines need to be taken seriously and that a systematic analysis of such failures may open fruitful avenues in research beyond current practices to improve such systems, including both speech-centric and multimodal interfaces. This workshop represents a starting point for this endeavour. The aim of the workshop was threefold: Firstly, to establish an interdisciplinary network of researchers that share a common interest in investigating communicative failures with a particular view towards robotic speech interfaces; secondly, to gain a partial overview of the “failure landscape” as experienced by roboticists and HRI researchers; and thirdly, to determine the potential for creating a robotic benchmark scenario for testing future speech interfaces with respect to the identified failures. The present article summarizes both the “failure landscape” surveyed during the workshop as well as the outcomes of the attempt to define a benchmark scenario.
24.
  • Güler, Püren, et al. (author)
  • Visual state estimation in unseen environments through domain adaptation and metric learning
  • 2022
  • In: Frontiers in Robotics and AI. - : Frontiers Media S.A. - 2296-9144. ; 9
  • Journal article (peer-reviewed), abstract:
    • In robotics, deep learning models are used in many visual perception applications, including the tracking, detection and pose estimation of robotic manipulators. The state of the art methods however are conditioned on the availability of annotated training data, which may in practice be costly or even impossible to collect. Domain augmentation is one popular method to improve generalization to out-of-domain data by extending the training data set with predefined sources of variation, unrelated to the primary task. While this typically results in better performance on the target domain, it is not always clear that the trained models are capable to accurately separate the signals relevant to solving the task (e.g., appearance of an object of interest) from those associated with differences between the domains (e.g., lighting conditions). In this work we propose to improve the generalization capabilities of models trained with domain augmentation by formulating a secondary structured metric-space learning objective. We concentrate on one particularly challenging domain transfer task-visual state estimation for an articulated underground mining machine-and demonstrate the benefits of imposing structure on the encoding space. Our results indicate that the proposed method has the potential to transfer feature embeddings learned on the source domain, through a suitably designed augmentation procedure, and on to an unseen target domain.
25.
  • Guzhva, Oleksiy, et al. (author)
  • Now you see me : Convolutional neural network based tracker for dairy cows
  • 2018
  • In: Frontiers in Robotics and AI. - : Frontiers Media SA. - 2296-9144. ; 5:SEP
  • Journal article (peer-reviewed), abstract:
    • To maintain dairy cattle health and welfare at commensurable levels, analysis of the behaviors occurring between cows should be performed. This type of behavioral analysis is highly dependent on reliable and robust tracking of individuals, for it to be viable and applicable on-site. In this article, we introduce a novel method for continuous tracking and data-marker based identification of individual cows based on convolutional neural networks (CNNs). The methodology for data acquisition and overall implementation of tracking/identification is described. The Region of Interest (ROI) for the recordings was limited to a waiting area with free entrances to four automatic milking stations and a total size of 6 × 18 meters. There were 252 Swedish Holstein cows during the time of study that had access to the waiting area at a conventional dairy barn with varying conditions and illumination. Three Axis M3006-V cameras placed in the ceiling at 3.6 meters height and providing top-down view were used for recordings. The total amount of video data collected was 4 months, containing 500 million frames. To evaluate the system two 1-h recordings were chosen. The exit time and gate-id found by the tracker for each cow were compared with the exit times produced by the gates. In total there were 26 tracks considered, and 23 were correctly tracked. Given those 26 starting points, the tracker was able to maintain the correct position in a total of 101.29 min or 225 s in average per starting point/individual cow. Experiments indicate that a cow could be tracked close to 4 min before failure cases emerge and that cows could be successfully tracked for over 20 min in mildly-crowded ( < 10 cows) scenes. The proposed system is a crucial stepping stone toward a fully automated tool for continuous monitoring of cows and their interactions with other individuals and the farm-building environment.
Type of publication
journal article (56)
research review (4)
Type of content
peer-reviewed (59)
other academic/artistic (1)
Author/editor
Castellano, Ginevra (4)
Skantze, Gabriel, 19 ... (4)
Leite, Iolanda (4)
Engwall, Olov (3)
Kragic, Danica, 1971 ... (2)
Pecora, Federico, 19 ... (2)
Nolte, Thomas (2)
Bensch, Suna (2)
Oertel, Catharine (2)
Loutfi, Amy, 1978- (2)
Magnusson, Martin, 1 ... (2)
Obaid, Mohammad, 198 ... (2)
Pareto, Lena, 1962- (2)
Bigun, Josef, 1961- (1)
Gärdenfors, Peter (1)
Hellström, Thomas (1)
Anund, Anna, 1964- (1)
Ghirlanda, Stefano (1)
Gredebäck, Gustaf (1)
Andersson, Jonas (1)
Fröhlich, Peter (1)
Lilienthal, Achim J. ... (1)
Beskow, Jonas (1)
Spampinato, Giacomo (1)
Ghadirzadeh, Ali (1)
Yang, Yanpeng (1)
Nilsson, Mikael (1)
Ardö, Håkan (1)
Calvo-Barajas, Natal ... (1)
Soda, Paolo (1)
Papadopoulos, Alessa ... (1)
Papadopoulos, Alessa ... (1)
Alexanderson, Simon (1)
Herlin, Anders Henri ... (1)
Albert, Saul (1)
Billing, Erik, 1981- (1)
Theodorou, Andreas, ... (1)
Längkvist, Martin, 1 ... (1)
Klügl, Franziska, 19 ... (1)
Karayiannidis, Yiann ... (1)
Stork, Johannes A, 1 ... (1)
Palmieri, Luigi (1)
Belpaeme, Tony (1)
Lenz, Reiner (1)
Paiva, Ana (1)
Rizgary, Daban (1)
Habibovic, Azra (1)
Lowe, Robert (1)
De Raedt, Luc, 1964- (1)
Balkenius, Christian (1)
Higher education institution
Kungliga Tekniska Högskolan (23)
Örebro universitet (9)
Uppsala universitet (6)
Chalmers tekniska högskola (6)
Göteborgs universitet (5)
Umeå universitet (5)
Högskolan i Skövde (5)
Högskolan i Halmstad (3)
Stockholms universitet (3)
Linköpings universitet (3)
Lunds universitet (3)
Högskolan Väst (2)
Mälardalens universitet (2)
Linnéuniversitetet (1)
RISE (1)
Sveriges Lantbruksuniversitet (1)
VTI - Statens väg- och transportforskningsinstitut (1)
Language
English (60)
Research subject (UKÄ/SCB)
Natural sciences (42)
Engineering and technology (35)
Social sciences (5)
Agricultural sciences (1)
