SwePub
Search the SwePub database


Hit list for search "WFRF:(ur Réhman Shafiq)"


  • Results 1-50 of 69
1.
  • Augustian, Midhumol, et al. (author)
  • EEG Analysis from Motor Imagery to Control a Forestry Crane
  • 2018
  • In: Intelligent Human Systems Integration (IHSI 2018). - Cham : Springer. - 9783319738871 - 9783319738888 ; pp. 281-286
  • Conference paper (peer-reviewed) abstract
    • Brain-computer interface (BCI) systems can provide people with the ability to communicate with and control real-world systems using neural activities. It therefore makes sense to develop an assistive framework for command and control of a future robotic system that can support human-robot collaboration. In this paper, we employ electroencephalographic (EEG) signals recorded by electrodes placed over the scalp. Motor imagery mentalization based on human hand movement is used to collect brain signals over the motor cortex area. The collected µ-wave (8-13 Hz) EEG signals were analyzed with event-related desynchronization/synchronization (ERD/ERS) quantification to extract a threshold between hand grip and release movements, and this information can be used to control the grasp and release functionality of a forestry crane. The experiment was performed with four healthy persons to demonstrate the proof-of-concept BCI system. The study demonstrates that the proposed method has the potential to assist crane operators performing advanced tasks under heavy cognitive workload.
  •  
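The ERD/ERS quantification described in the abstract can be sketched as follows. The sampling rate, the mu-band limits, and the 20% decision threshold are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def band_power(signal, fs, lo=8.0, hi=13.0):
    """Power in the mu band (8-13 Hz) via a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

def erd_percent(baseline, event, fs):
    """Classic ERD% = (P_baseline - P_event) / P_baseline * 100.
    Positive values indicate desynchronization (a power drop)."""
    p_base = band_power(baseline, fs)
    p_event = band_power(event, fs)
    return (p_base - p_event) / p_base * 100.0

def crane_command(erd, threshold=20.0):
    """Map ERD magnitude to a binary crane command (hypothetical threshold)."""
    return "grip" if erd >= threshold else "release"
```

A strong mu-power drop during imagined grasping would then map to the crane's grip command, mirroring the threshold-based control the abstract describes.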
2.
  • Ehatisham-ul-Haq, Muhammad, et al. (author)
  • Identifying smartphone users based on their activity patterns via mobile sensing
  • 2017
  • In: Procedia Computer Science. - : Elsevier. - 1877-0509. ; 113, pp. 202-209
  • Journal article (peer-reviewed) abstract
    • Smartphones are ubiquitous devices that enable users to perform many of their routine tasks anytime and anywhere. With the advancement of information technology, smartphones are now equipped with sensing and networking capabilities that provide context-awareness for a wide range of applications. Due to their ease of use and access, many users store private data on their smartphones, such as personal identifiers and bank account details. This type of sensitive data can be vulnerable if the device gets lost or stolen. The existing methods for securing mobile devices, including passwords, PINs, and pattern locks, are susceptible to many attacks, such as smudge attacks. This paper proposes a novel framework to protect sensitive data on smartphones by identifying smartphone users based on their behavioral traits using smartphone-embedded sensors. A series of experiments has been conducted to validate the proposed framework, demonstrating its effectiveness.
  •  
3.
  • Fahlquist, Karin, et al. (author)
  • Human animal machine interaction : Animal behavior awareness and digital experience
  • 2010
  • In: Proceedings of ACM Multimedia 2010 - Brave New Ideas, 25-29 October 2010, Firenze, Italy. - New York, NY, USA : ACM. - 9781605589336 ; pp. 1269-1274
  • Conference paper (peer-reviewed) abstract
    • This paper proposes an intuitive wireless sensor/actuator-based communication network for human-animal interaction in a digital zoo. In order to enable effective observation and control of wildlife, we have built a wireless sensor network: 25 video-transmitting nodes are installed for animal behavior observation, and experimental vibrotactile collars have been designed for effective control in an animal park. The goal of our research is twofold. Firstly, to provide an interaction between digital users and animals and to monitor animal behavior for safety purposes. Secondly, to investigate how animals can be controlled or trained using vibrotactile stimuli instead of electric stimuli. We have designed a multimedia sensor network for human-animal-machine interaction and evaluated the effect of the human-animal-machine state communication model in field experiments.
  •  
4.
  • Halawani, Alaa, 1974-, et al. (author)
  • Active vision for controlling an electric wheelchair
  • 2012
  • In: Intelligent Service Robotics. - : Springer. - 1861-2776 .- 1861-2784. ; 5:2, pp. 89-98
  • Journal article (peer-reviewed) abstract
    • Most of the electric wheelchairs available on the market are joystick-driven and therefore assume that the user is able to use hand motion to steer the wheelchair. This does not apply to many users who are only capable of moving the head, such as quadriplegia patients. This paper presents a vision-based head motion tracking system to enable such patients to control the wheelchair. The novel approach that we suggest is to use active rather than passive vision to achieve head motion tracking. In active vision-based tracking, the camera is placed on the user's head rather than in front of it. This makes tracking easier and more accurate, and enhances the resolution, as we demonstrate theoretically and experimentally. The proposed tracking scheme is then used successfully to control our electric wheelchair to navigate in a real-world environment.
  •  
5.
  • Halawani, Alaa, 1974-, et al. (author)
  • Active Vision for Tremor Disease Monitoring
  • 2015
  • In: 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences AHFE 2015. - : Elsevier BV. ; pp. 2042-2048
  • Conference paper (peer-reviewed) abstract
    • The aim of this work is to introduce a prototype for monitoring tremor diseases using computer vision techniques. While vision has been used for this purpose before, the system we introduce differs intrinsically from other traditional systems. The essential difference is the placement of the camera on the user's body rather than in front of it, thus reversing the whole process of motion estimation. This is called active motion tracking. Active vision is simpler in setup and achieves more accurate results compared to traditional arrangements, which we refer to as "passive" here. One main advantage of active tracking is its ability to detect even tiny motions with a simple setup, which makes it very suitable for monitoring tremor disorders.
  •  
6.
  • Harisubramanyabalaji, Subramani Palanisamy, et al. (author)
  • Improving Image Classification Robustness Using Predictive Data Augmentation
  • 2018
  • In: Computer Safety, Reliability, and Security. - Cham : Springer. - 9783319992280 - 9783319992297 ; pp. 548-561
  • Conference paper (peer-reviewed) abstract
    • Safe autonomous navigation is challenging when the sensing system fails. A classification algorithm that is robust to camera position, view angle, and environmental conditions, across vehicles of different sizes and types (car, bus, truck, etc.), can safely regulate vehicle control. As training data play a crucial role in robust classification of traffic signs, an effective augmentation technique that enriches the model's capacity to withstand variations in the urban environment is required. In this paper, a framework for identifying model weaknesses and a targeted augmentation methodology are presented. Based on off-line behavior identification, the exact limitations of a Convolutional Neural Network (CNN) model are estimated so that only those challenge levels necessary for improved classifier robustness are augmented. Predictive Augmentation (PA) and Predictive Multiple Augmentation (PMA) methods are proposed to adapt the model based on acquired challenges with a high numerical value of confidence. We validated our framework on two different training datasets and with five generated test groups containing varying levels of challenge (simple to extreme). The results show an impressive improvement of 5-20% in overall classification accuracy while maintaining high confidence.
  •  
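The predictive-augmentation idea, augmenting only the challenge levels where the classifier is weak, can be sketched in a library-free form. The level names, accuracy target, and transforms below are hypothetical, and the paper's PA/PMA methods operate on a trained CNN rather than this toy selector:

```python
def select_weak_levels(per_level_accuracy, target=0.95):
    """Challenge levels where the model falls below the target accuracy:
    these are the identified weaknesses worth augmenting against."""
    return [lvl for lvl, acc in per_level_accuracy.items() if acc < target]

def augment_dataset(dataset, weak_levels, transforms):
    """Extend the training set only with transforms for the weak levels,
    leaving challenge levels the model already handles untouched."""
    augmented = list(dataset)
    for lvl in weak_levels:
        augmented.extend(transforms[lvl](x) for x in dataset)
    return augmented
```

The design choice mirrors the abstract: rather than augmenting uniformly, the measured per-level behavior decides which augmentations are worth the training cost.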
7.
  • Karlsson, Johannes, 1977-, et al. (author)
  • Augmented reality to enhance visitors experience in a digital zoo
  • 2010
  • In: Proceedings of the 9th International Conference on Mobile and Ubiquitous Multimedia (ACM MUM'10), Limassol, Cyprus. - New York, NY, USA : ACM. - 9781450304245
  • Conference paper (peer-reviewed)
  •  
8.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • A pilot user's prospective in mobile robotic telepresence system
  • 2014
  • In: 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA 2014. - : IEEE conference proceedings. - 9786163618238
  • Conference paper (peer-reviewed) abstract
    • In this work we present an interactive video conferencing system specifically designed to enhance the experience of video teleconferencing for a pilot user. We have used an Embodied Telepresence System (ETS), previously designed to enhance the experience of video teleconferencing for the collaborators. Here we deploy the ETS in a novel scenario to improve the pilot user's experience during distance communication: the ETS is used to adjust the pilot user's view at the distant location (e.g., a distantly located conference or meeting). A velocity-profile control for the ETS is developed, implicitly driven by the pilot user's head. An experiment was conducted to test whether the view-adjustment capability of the ETS increases the collaborative video conferencing experience for the pilot user. In the user study, participants (pilot users) interacted both through the ETS and through a traditional computer-based video conferencing tool. Overall, the user study suggests the effectiveness of our approach in enhancing the video conferencing experience for the pilot user.
  •  
9.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Action Augmented Real Virtuality Design for Presence
  • 2018
  • In: IEEE Transactions on Cognitive and Developmental Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2379-8920 .- 2379-8939. ; 10:4, pp. 961-972
  • Journal article (peer-reviewed) abstract
    • This paper addresses the important question of how to design a video teleconferencing setup that increases the experience of spatial and social presence. Traditional video teleconferencing setups are lacking in presenting the nonverbal behaviors that humans express in face-to-face communication, which results in a decreased experience of presence. In order to address this issue, we first present a conceptual framework of presence for video teleconferencing. We introduce a modern presence concept called real virtuality and propose a new way of achieving it based on body or artifact actions to increase the feeling of presence; we name this concept presence through actions. Using this new concept, we present the design of a novel action-augmented real virtuality prototype that addresses the challenges related to the design of an action prototype, action embodiment, and face representation. Our action prototype is a telepresence mechatronic robot (TEBoT), and action embodiment is through a head-mounted display (HMD). The face representation solves the problem of face occlusion introduced by the HMD. The novel combination of HMD, TEBoT, and face representation algorithm has been tested in a real video teleconferencing scenario for its ability to solve the challenges related to spatial and social presence. We have performed a user study in which the invited participants were asked to experience our novel setup and to compare it with a traditional video teleconferencing setup. The results show that the action capabilities not only increase the feeling of spatial presence but also increase the feeling of social presence of a remote person among local collaborators.
  •  
10.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Distance Communication : Trends and Challenges and How to Resolve them
  • 2014
  • In: Strategies for a creative future with computer science, quality design and communicability. - Italy : Blue Herons Editions. - 9788896471104
  • Book chapter (peer-reviewed) abstract
    • Distance communication is becoming an important part of our lives because of the current advancement in computer-mediated communication (CMC). Despite this advancement, CMC, and especially video teleconferencing, is still far from face-to-face (FtF) interaction. This study focuses on advancements in video teleconferencing, their trends, and their challenges. Furthermore, this work presents an overview of previously developed hardware and software techniques to improve the video teleconferencing experience. After discussing the background development of video teleconferencing, we propose an intuitive solution to improve the video teleconferencing experience. To support the proposed solution, an embodied-interaction-based distance communication framework is developed, whose effectiveness is validated by user studies. To summarize, this work considers the following questions: What factors make video teleconferencing different from face-to-face interaction? What have researchers done so far to improve video teleconferencing? How can the teleconferencing experience be further improved? How can more non-verbal modalities be added to enhance the video teleconferencing experience? Finally, we also provide future directions for embodied-interaction-based video teleconferencing.
  •  
11.
  • Khan, Muhammad Sikandar Lal, et al. (author)
  • Embodied head gesture and distance education
  • 2015
  • In: 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences. - : Elsevier BV. ; pp. 2034-2041
  • Conference paper (peer-reviewed) abstract
    • Traditional distance education settings are usually based on video teleconferencing scenarios where human emotions and social presence are expressed only through facial and vocal expressions, which are not enough for complete presence; our bodily gestures and actions play a vital role in understanding the exact meaning of communication patterns, especially in teaching-learning scenarios. Bodily gestures, especially head movements, offer cues for understanding contextual knowledge during conversational dialogue. In this work, we have considered the tutor's head gesture embodiment for an educational assistive robot and compared the results with the standard audio-video teleconferencing scenarios used in online education. We have used an Embodied Telepresence System (ETS), which emulates the head gestures of the human tutor, to investigate distance communication in an online education setting. Our experimental study includes ten able-bodied subjects (5 male and 5 female) from various countries. These participants were asked to take part in an online education scenario through i) a traditional video conferencing tool, i.e. Skype, and ii) an extended setup based on the ETS. Statistical analysis of the results indicates the effectiveness of our novel embodied head gesture based approach, showing that the proposed design of the ETS is able to improve user engagement in distance education.
  •  
12.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Embodied tele-presence system (ETS) : Designing tele-presence for video teleconferencing
  • 2014
  • In: 3rd International Conference on Design, User Experience, and Usability: User Experience Design for Diverse Interaction Platforms and Environments, DUXU 2014, Held as Part of 16th International Conference on Human-Computer Interaction, HCI Int. 2014. - Cham : Springer International Publishing. - 9783319076256 - 9783319076263 ; pp. 574-585
  • Conference paper (peer-reviewed) abstract
    • In spite of the progress made in teleconferencing over the last decades, it is still far from a resolved issue. In this work, we present an intuitive video teleconferencing system, namely the Embodied Tele-Presence System (ETS), which is based on the embodied interaction concept. This work presents the results of a user study considering the hypothesis: "An embodied interaction based video conferencing system performs better than the standard video conferencing system in representing nonverbal behaviors, thus creating a 'feeling of presence' of a remote person among his/her local collaborators". Our ETS integrates standard audio-video conferencing with mechanical embodiment of the head gestures of a remote person (as nonverbal behavior) to enhance the level of interaction. To highlight the technical challenges and design principles behind such tele-presence systems, we have also performed a system evaluation which shows the accuracy and efficiency of our ETS design. The paper further provides an overview of our case study and an analysis of our user evaluation. The user study shows that the proposed embodied interaction approach to video teleconferencing increases 'in-meeting interaction' and enhances the 'feeling of presence' between a remote participant and his/her collaborators.
  •  
14.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Expressive multimedia : Bringing action to physical world by dancing-tablet
  • 2015
  • In: HCMC 2015 - Proceedings of the 2nd Workshop on Computational Models of Social Interactions. - New York, NY, USA : ACM Digital Library. - 9781450337472 ; pp. 9-14
  • Conference paper (peer-reviewed) abstract
    • Design practice based on the embodied interaction concept focuses on developing new user interfaces for computer devices that merge digital content with the physical world. In this work we propose a novel embodied-interaction-based design in which the 'action' information of digital content is presented in the physical world. More specifically, we have mapped the 'action' information of video content from the digital world into the physical world. The motivating example presented in this paper is our novel dancing-tablet: a tablet PC dances to the rhythm of a song, so the 'action' information is not just confined to a 2D flat display but is also expressed by it. This paper presents i) the hardware design of our mechatronic dancing-tablet platform, ii) a software algorithm for musical feature extraction, and iii) an embodied computational model for mapping the 'action' information of the musical expression to the mechatronic platform. Our user study shows that the overall perception of audio-video music is enhanced by our dancing-tablet setup.
  •  
15.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Face-off : A face reconstruction technique for virtual reality (VR) scenarios
  • 2016
  • In: 14th European Conference on Computer Vision, ECCV 2016. - Cham : Springer. - 9783319466033 - 9783319466040 ; pp. 490-503
  • Conference paper (peer-reviewed) abstract
    • Virtual Reality (VR) headsets occlude a significant portion of the human face. The real human face is required in many VR applications, for example video teleconferencing. This paper proposes a wearable-camera-based solution to reconstruct the real face of a person wearing a VR headset. Our solution builds on asymmetrical principal component analysis (aPCA). A user-specific training model is built using aPCA with full-face, lip, and eye-region information. During the testing phase, the lower face region and partial eye information are used to reconstruct the wearer's face. The online testing session consists of two phases: (i) a calibration phase and (ii) a reconstruction phase. In the former, a small calibration step is performed to align test information with the training data, while the latter uses half-face information to reconstruct the full face using the aPCA-based trained data. The proposed approach is validated with qualitative and quantitative analysis.
  •  
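The idea of recovering a full face from partial observations can be illustrated with plain PCA and a least-squares fit over the visible pixels. This is a stand-in sketch: the paper's asymmetrical PCA (aPCA) and its calibration step are not reproduced here, and all dimensions below are synthetic:

```python
import numpy as np

def fit_pca(faces, k):
    """Train a plain PCA model (mean + top-k components) on full-face
    vectors. The paper's aPCA differs; this is a standard-PCA stand-in."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k].T                       # basis shape: (dim, k)

def reconstruct(mean, basis, x_visible, visible_idx):
    """Recover the full face from the visible pixels only: least-squares
    for the PCA coefficients, then project back through the full basis."""
    a_vis = basis[visible_idx]                  # rows of the basis we can see
    coeffs, *_ = np.linalg.lstsq(a_vis, x_visible - mean[visible_idx],
                                 rcond=None)
    return mean + basis @ coeffs
```

With enough visible pixels relative to the number of components, the coefficient fit is well-posed and the occluded region is filled in from the training subspace, which is the essence of the half-face-to-full-face step the abstract describes.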
16.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Gaze perception and awareness in smart devices
  • 2016
  • In: International journal of human-computer studies. - : Elsevier. - 1071-5819 .- 1095-9300. ; 92-93, pp. 55-65
  • Journal article (peer-reviewed) abstract
    • Eye contact and gaze awareness play a significant role in conveying emotions and intentions during face-to-face conversation. Humans can perceive each other's gaze quite naturally and accurately. However, gaze awareness/perception is ambiguous during video teleconferencing performed by computer-based devices (such as laptops, tablets, and smart-phones). The reasons for this ambiguity are (i) the camera position relative to the screen and (ii) the 2D rendition of the 3D human face, i.e., the 2D screen is unable to deliver an accurate gaze during video teleconferencing. To solve this problem, researchers have proposed different hardware setups with complex software algorithms. The most recent solution for accurate gaze perception employs 3D interfaces, such as 3D screens and 3D face-masks. However, today commonly used video teleconferencing devices are smart devices with 2D screens. Therefore, there is a need to improve gaze awareness/perception in these smart devices. In this work, we have revisited the question: how to improve a remote user's gaze awareness among his/her collaborators. Our hypothesis is that an accurate gaze perception can be achieved by the '3D embodiment' of a remote user's head gesture during video teleconferencing. We have prototyped an embodied telepresence system (ETS) for the 3D embodiment of a remote user's head. Our ETS is based on a 3-DOF neck robot with a mounted smart device (tablet PC). The electromechanical platform in combination with a smart device is a novel setup that is used for studying gaze awareness/perception in 2D screen-based smart devices during video teleconferencing. Two important gaze-related issues are considered in this work: (i) the 'Mona Lisa gaze effect', where the gaze is always directed at the person independent of his position in the room, and (ii) 'gaze awareness/faithfulness', the ability to perceive an accurate spatial relationship between the observing person and the object by an actor. Our results confirm that the 3D embodiment of a remote user's head not only mitigates the Mona Lisa gaze effect but also supports three levels of gaze faithfulness, hence accurately projecting the human gaze in distant space.
  •  
17.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Head Orientation Modeling : Geometric Head Pose Estimation using Monocular Camera
  • 2013
  • In: Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013. - : The Institute of Industrial Applications Engineers. ; pp. 149-153
  • Conference paper (other academic/artistic) abstract
    • In this paper we propose a simple and novel method for head pose estimation using 3D geometric modeling. Our algorithm initially employs Haar-like features to detect the face and facial feature areas (more precisely, the eyes). For robust tracking of these regions, it also uses the Tracking-Learning-Detection (TLD) framework on a given video sequence. Based on the two human eye areas, we model a pivot point using a distance measure derived from anthropometric statistics and the MPEG-4 coding scheme. This simple geometric approach relies on the human facial feature structure in the camera-view plane to estimate the yaw, pitch, and roll of the human head. The accuracy and effectiveness of our proposed method are reported on live video sequences against a head-mounted inertial measurement unit (IMU).
  •  
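A minimal version of the geometric idea, estimating head orientation from the two detected eye regions, can be sketched as below. The foreshortening-based yaw and the frontal reference distance d_ref are illustrative assumptions; the paper's pivot-point model with anthropometric and MPEG-4 measures is more elaborate:

```python
import math

def head_pose_from_eyes(left, right, d_ref):
    """Estimate roll and yaw (degrees) from the two eye centres in the
    image plane. Illustrative geometry only, not the paper's exact model:
    roll is the tilt of the inter-ocular line; yaw assumes foreshortening
    of the inter-ocular distance relative to d_ref (pixels at frontal
    pose, calibrated from anthropometric statistics)."""
    dx, dy = right[0] - left[0], right[1] - left[1]
    roll = math.degrees(math.atan2(dy, dx))
    d_obs = math.hypot(dx, dy)
    ratio = min(1.0, d_obs / d_ref)      # clamp noise beyond frontal pose
    yaw = math.degrees(math.acos(ratio)) # shorter eye line -> larger yaw
    return roll, yaw
```

For example, eye centres level with each other at the full reference separation give roll = yaw = 0, while a halved separation maps to a 60-degree yaw under this foreshortening model.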
18.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Moveable facial features in a Social Mediator
  • 2017
  • In: Intelligent Virtual Agents. - Cham : Springer London. - 9783319674001 - 9783319674018 ; pp. 205-208
  • Conference paper (peer-reviewed) abstract
    • A brief display of facial features based behavior has a major impact on personality perception in human-human communications. Creating such personality traits and representations in a social robot is a challenging task. In this paper, we propose an approach for a robotic face presentation based on moveable 2D facial features and present a comparative study where a synthesized face is projected using three setups: 1) a 3D mask, 2) a 2D screen, and 3) our 2D moveable facial feature based visualization. We found that the robot's personality and character are highly influenced by the projected face quality as well as the motion of facial features.
  •  
19.
  • Khan, Muhammad Sikandar Lal, 1988- (author)
  • Presence through actions : theories, concepts, and implementations
  • 2017
  • Doctoral thesis (other academic/artistic) abstract
    • During face-to-face meetings, humans use multimodal information, including verbal information, visual information, body language, facial expressions, and other non-verbal gestures. In contrast, during computer-mediated communication (CMC), humans rely either on mono-modal information such as text-only, voice-only, or video-only, or on bi-modal information using audiovisual modalities such as video teleconferencing. Psychologically, the difference between the two lies in the level of the subjective experience of presence, where people perceive a reduced feeling of presence in the case of CMC. Despite the current advancements in CMC, it is still far from face-to-face communication, especially in terms of the experience of presence. This thesis aims to introduce new concepts, theories, and technologies for presence design, where the core is actions for creating presence. Thus, the contribution of the thesis can be divided into a technical contribution and a knowledge contribution. Technically, this thesis details novel technologies for improving the presence experience during mediated communication (video teleconferencing). The proposed technologies include action robots (including a telepresence mechatronic robot (TEBoT) and a face robot), embodied control techniques (head orientation modeling and virtual reality headset based collaboration), and face reconstruction/retrieval algorithms. The introduced technologies enable action possibilities and embodied interactions that improve the presence experience between distantly located participants. The novel setups were put into real experimental scenarios, and the well-known social, spatial, and gaze-related problems were analyzed. The developed technologies and the results of the experiments led to the knowledge contribution of this thesis. In terms of knowledge contribution, this thesis presents a more general theoretical conceptual framework for mediated communication technologies. This conceptual framework can guide telepresence researchers toward the development of appropriate technologies for mediated communication applications. Furthermore, this thesis also presents a novel strong concept, presence through actions, that brings in philosophical understandings for developing presence-related technologies. The strong concept presence through actions is intermediate-level knowledge that proposes a new way of creating and developing future 'presence artifacts'. Presence through actions is an action-oriented phenomenological approach to presence that differs from traditional immersive presence approaches, which are based (implicitly) on rationalist, internalist views.
  •  
20.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Tele-embodied agent (TEA) for video teleconferencing
  • 2013
  • In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, MUM 2013. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450326483
  • Conference paper (peer-reviewed) abstract
    • We propose a design of a teleconference system which expresses nonverbal behavior (in our case, head gestures) along with audio-video communication. Previous audio-video conferencing systems fall short in presenting the nonverbal behaviors which we, as humans, usually use in face-to-face interaction. Recently, research in teleconferencing systems has expanded to include nonverbal cues of the remote person in distance communication. The accurate representation of non-verbal gestures for such systems is still challenging because they depend on hand-operated devices (like a mouse or keyboard). Furthermore, they still lack accurate representation of human gestures. We believe that incorporating embodied interaction in video teleconferencing (i.e., using the physical world as a medium for interacting with digital technology) can result in nonverbal behavior representation. The experimental platform named Tele-Embodied Agent (TEA) is introduced, which incorporates a remote person's head gestures to study a new paradigm of embodied interaction in video teleconferencing. Our preliminary test shows the accuracy (with respect to pose angles) and efficiency (with respect to time) of our proposed design. TEA can be used in the medical field, factories, offices, the gaming industry, the music industry, and for training.
  •  
21.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Tele-immersion : Virtual reality based collaboration
  • 2016
  • In: 18th International Conference on Human-Computer Interaction, HCI International 2016. - Cham : Springer. - 9783319405476 - 9783319405483 ; pp. 352-357
  • Conference paper (peer-reviewed) abstract
    • The 'perception of being present in another space' during video teleconferencing is a challenging task. This work makes an effort to improve a user's perception of being 'present' in another space by employing a virtual reality (VR) headset and an embodied telepresence system (ETS). In our application scenario, a remote participant uses a VR headset to collaborate with local collaborators. At the local site, an ETS is used as a physical representation of the remote participant among his/her local collaborators. The head movements of the remote person are mapped and presented by the ETS along with audio-video communication. Key considerations of the complete design are discussed, and solutions to challenges related to head tracking, audio-video communication, and data communication are presented. The proposed approach is validated in a user study with quantitative analysis of immersion and presence parameters.
  •  
22.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Telepresence Mechatronic Robot (TEBoT) : Towards the design and control of socially interactive bio-inspired system
  • 2016
  • In: Journal of Intelligent & Fuzzy Systems. - : IOS Press. - 1064-1246 .- 1875-8967. ; 31:5, pp. 2597-2610
  • Journal article (peer-reviewed) abstract
    • Socially interactive systems are embodied agents that engage in social interactions with humans. From a design perspective, these systems are built by considering a biologically inspired (bio-inspired) design that can mimic and simulate human-like communication cues and gestures. The design of a bio-inspired system usually consists of (i) studying biological characteristics, (ii) designing a similar biological robot, and (iii) motion planning that can mimic the biological counterpart. In this article, we present the design, development, control strategy, and verification of our socially interactive bio-inspired robot, namely the Telepresence Mechatronic Robot (TEBoT). The key contribution of our work is an embodiment of real human neck movements by i) designing a mechatronic platform based on the dynamics of a real human neck and ii) capturing real head movements through our novel single-camera based vision algorithm. Our socially interactive bio-inspired system is based on an intuitive integration-design strategy that combines a computer vision based geometric head pose estimation algorithm, a model based design (MBD) approach, and real-time motion planning techniques. We have conducted extensive testing to demonstrate the effectiveness and robustness of our proposed system.
  •  
23.
  •  
24.
  • Li, Bo, 1982-, et al. (authors)
  • Fast edge detection by center of mass
  • 2013
  • In: The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013 (ICISIP2013). Kitakyushu, Japan: The Institute of Industrial Applications Engineers; pp. 103-110
  • Conference paper (peer-reviewed). Abstract:
    • In this paper, a novel edge detection method that computes the image gradient using the concept of Center of Mass (COM) is presented. By using an integral image, the algorithm runs with a constant number of operations per pixel, independently of scale. Compared with conventional convolutional edge detectors such as the Sobel detector, the proposed method is faster when the region size is larger than 9×9. The proposed method can be used as a framework for multi-scale edge detectors when the goal is fast performance. Experimental results show that edge detection by COM is competitive with Canny edge detection.
  •  
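The constant-time COM gradient described in the abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: function names are mine, and integral images of I, x·I and y·I (precomputed once per frame in a real detector) give window mass and first moments in constant time, so the center-of-mass offset, a gradient proxy, costs O(1) per pixel at any window size.

```python
# Illustrative sketch (not the authors' implementation) of gradient
# estimation by Center of Mass (COM) using integral images.

def integral(img):
    """Summed-area table with a zero top row and left column."""
    h, w = len(img), len(img[0])
    ii = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        run = 0.0
        for x in range(w):
            run += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + run
    return ii

def box(ii, x0, y0, x1, y1):
    """Sum over the inclusive box [x0..x1] x [y0..y1] in O(1)."""
    return ii[y1 + 1][x1 + 1] - ii[y0][x1 + 1] - ii[y1 + 1][x0] + ii[y0][x0]

def com_offset(img, cx, cy, r):
    """Offset of the window's center of mass from its center (cx, cy).

    The offset direction approximates the gradient direction and its
    magnitude grows with local contrast. For clarity the moment images
    are rebuilt here; a real detector precomputes them once per frame.
    """
    h, w = len(img), len(img[0])
    xi = [[img[y][x] * x for x in range(w)] for y in range(h)]
    yi = [[img[y][x] * y for x in range(w)] for y in range(h)]
    I, X, Y = integral(img), integral(xi), integral(yi)
    x0, y0, x1, y1 = cx - r, cy - r, cx + r, cy + r
    m = box(I, x0, y0, x1, y1)
    if m == 0:
        return 0.0, 0.0
    return box(X, x0, y0, x1, y1) / m - cx, box(Y, x0, y0, x1, y1) / m - cy
```

On a vertical step edge the COM shifts toward the bright side, giving a nonzero horizontal offset and a zero vertical one; on flat regions the offset vanishes.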
25.
  • Li, Bo, et al. (authors)
  • i-Function of Electronic Cigarette : Building Social Network by Electronic Cigarette
  • 2011
  • In: 2011 IEEE International Conferences on Internet of Things and Cyber, Physical and Social Computing. Los Alamitos, CA, USA: IEEE Computer Society. ISBN 9781457719769; pp. 634-637
  • Conference paper (peer-reviewed). Abstract:
    • In this paper the role of the electronic cigarette (e-cigarette) is considered in the context of social networking and internet-based help for smoking cessation or reduction in smoking behavior. The electronic cigarette can be a good conversation starter and interaction device, and its appeal can be used for social network building, with virtual communities (e.g. Facebook, Twitter) used to exchange experiences and to support each other. A framework for social network interaction through the interact function (i-function) of the electronic cigarette is presented, which enables two e-cigarette users to interact immediately when they are in close range. The framework also offers a functional possibility of reflecting people’s emotions on social network websites.
  •  
26.
  • Li, Bo, 1982-, et al. (authors)
  • Independent Thresholds on Multi-scale Gradient Images
  • 2013
  • In: The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013 (ICISIP2013). Kitakyushu, Japan: The Institute of Industrial Applications Engineers; pp. 124-131
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we propose a multi-scale edge detection algorithm based on proportional scale summing. Our analysis shows that proportional scale summing improves the edge detection rate by applying independent thresholds on multi-scale gradient images. The method improves edge detection and localization by summing gradient images weighted by a proportional parameter c^n (c < 1), which ensures that the detected edges lie as close as possible to the fine scale. We employ non-maxima suppression and a thinning step similar to the Canny edge detection framework on the summed gradient images. Experimental results show that the proposed method detects edges successfully and yields better edge detection performance than the Canny edge detector and the scale multiplication edge detector.
  •  
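The proportional scale summing step above reduces to a weighted sum of per-scale gradient maps. A minimal sketch, assuming the gradient maps G_0 (finest) to G_{N-1} (coarsest) and the per-scale thresholds are computed elsewhere; the function name is mine:

```python
# Combine multi-scale gradient maps as sum_n c**n * G_n with c < 1,
# which biases localization toward the fine scale while coarser scales
# still reinforce true edges.

def proportional_scale_sum(grads, c=0.5):
    h, w = len(grads[0]), len(grads[0][0])
    out = [[0.0] * w for _ in range(h)]
    for n, g in enumerate(grads):
        weight = c ** n  # fine scale n=0 gets weight 1
        for y in range(h):
            for x in range(w):
                out[y][x] += weight * g[y][x]
    return out
```

Non-maxima suppression and the independent per-scale thresholds described in the abstract would then operate on this summed map and the individual maps respectively.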
27.
  • Li, Bo, 1982-, et al. (authors)
  • Restricted Hysteresis Reduce Redundancy in Edge Detection
  • 2013
  • In: Journal of Signal and Information Processing. ISSN 2159-4465, 2159-4481; 4:3B, pp. 158-163
  • Journal article (peer-reviewed). Abstract:
    • In edge detection algorithms there is a common redundancy problem, especially when the gradient direction is close to -135°, -45°, 45°, or 135°: a double-edge effect appears on edges around these directions, caused by the discrete calculation of non-maximum suppression. Many algorithms use edge points as features for further tasks such as line extraction, curve detection, matching and recognition, so redundancy strongly affects both speed and accuracy. We find that most edge detection algorithms have a redundancy of 50% in the worst case and 0% in the best case, depending on the edge direction distribution; the typical redundancy rate on natural images is approximately 15% to 20%. Based on Canny’s framework, we propose a restriction in the hysteresis step. Our experiments show that the proposed restricted hysteresis reduces the redundancy successfully.
  •  
28.
  • Liu, Li, 1965-, et al. (authors)
  • Vibrotactile chair : A social interface for blind
  • 2006
  • In: Proceedings SSBA 2006. Umeå: Umeå universitet, Institutionen för datavetenskap; pp. 117-120
  • Conference paper (other academic/artistic). Abstract:
    • In this paper we present our vibrotactile chair, a social interface for the blind. With this chair a blind person can get on-line emotion information about the person he/she is facing, which greatly enhances communication ability and improves the quality of social life of the blind. In the paper we discuss the technical challenges and design principles behind the chair, and introduce the experimental platform: the tactile facial expression appearance recognition system (TEARS)™.
  •  
29.
  • Lu, Zhihan, et al. (authors)
  • Anaglyph 3D stereoscopic visualization of 2D video based on fundamental matrix
  • 2013
  • In: 2013 International Conference on Virtual Reality and Visualization. IEEE. ISBN 9780769551500, 9781479923229; pp. 305-308
  • Conference paper (peer-reviewed). Abstract:
    • In this paper, we propose a simple anaglyph 3D stereo generation algorithm from a 2D video sequence with a monocular camera. In our novel approach we employ a camera pose estimation method to directly generate stereoscopic 3D from 2D video without building a depth map explicitly. Our cost-effective method is suitable for arbitrary real-world video sequences and produces smooth results. We use image stitching based on plane correspondence using the fundamental matrix, and we demonstrate that correspondence-plane image stitching based on the homography matrix alone cannot generate a better result. Furthermore, we utilize the structure-from-motion (with fundamental matrix) based reconstructed camera pose model to accomplish the visual anaglyph 3D illusion. The proposed approach demonstrates very good performance for most video sequences.
  •  
30.
  • Lu, Zhihan, et al. (authors)
  • Hand and Foot Gesture Interaction for Handheld Devices
  • 2013
  • In: MM '13: Proceedings of the 21st ACM International Conference on Multimedia. New York, NY, USA: ACM. ISBN 9781450324045; pp. 621-624
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we present a hand- and foot-based immersive multimodal interaction approach for handheld devices. A smart phone based immersive football game is designed as a proof of concept. Our proposed method combines the input modalities (i.e. hand and foot) and provides a coordinated output to both modalities along with audio and video. In this work, human foot gestures are detected and tracked using a template matching method and the Tracking-Learning-Detection (TLD) framework. We evaluated our system's usability through a user study in which we asked participants to assess the proposed interaction method. Our preliminary evaluation demonstrates the efficiency and ease of use of the proposed multimodal interaction approach.
  •  
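A minimal sketch of the template-matching half of such a gesture detector (the TLD tracker is a separate, learned component not reproduced here, and since the abstract does not specify the matching score, plain sum-of-squared-differences is assumed):

```python
# Brute-force SSD template matching: slide the template over the frame
# and return the top-left (x, y) position with the smallest
# sum-of-squared-differences. Frames and templates are 2D grayscale grids.

def match_template(frame, template):
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            ssd = 0.0
            for j in range(th):
                for i in range(tw):
                    d = frame[y + j][x + i] - template[j][i]
                    ssd += d * d
            if ssd < best:
                best, best_pos = ssd, (x, y)
    return best_pos
```

In practice a library routine such as OpenCV's matchTemplate, combined with an image pyramid, would replace this O(frame × template) scan; the sketch only shows the principle.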
31.
  • Lu, Zhihan, et al. (authors)
  • Multi-Gesture based Football Game in Smart Phones
  • 2013
  • In: SA '13: SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications. NY, USA: Association for Computing Machinery (ACM). ISBN 9781450326339
  • Conference paper (peer-reviewed)
  •  
32.
  • Lu, Zhihan, et al. (authors)
  • Multimodal Hand and Foot Gesture Interaction for Handheld Devices
  • 2014
  • In: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP). Association for Computing Machinery (ACM). ISSN 1551-6857, 1551-6865; 11:1
  • Journal article (peer-reviewed). Abstract:
    • We present a hand- and foot-based multimodal interaction approach for handheld devices. Our method combines the input modalities (i.e., hand and foot) and provides a coordinated output to both modalities along with audio and video. Human foot gestures are detected and tracked using contour-based template detection (CTD) and the Tracking-Learning-Detection (TLD) algorithm. 3D foot pose is estimated from the passive homography matrix of the camera. 3D stereoscopic rendering and vibrotactile feedback are used to enhance the immersive feeling. We developed a multimodal football game based on this approach as a proof of concept, and confirm our system's user satisfaction through a user study.
  •  
33.
  • Lu, Zhihan, et al. (authors)
  • Touch-less interaction smartphone on go!
  • 2013
  • In: Proceedings of SIGGRAPH Asia 2013. New York, NY, USA: ACM. ISBN 9781450326346
  • Conference paper (peer-reviewed). Abstract:
    • A smartphone touch-less interaction method based on mixed hardware and software is proposed in this work. The software application renders circle-menu graphics and status information using the smart phone's screen and audio. Augmented reality image rendering technology is employed for convenient finger-phone interaction. The user interacts with the application using finger gesture motion behind the camera, which triggers the interaction event and generates activity sequences for interactive buffers. The combination of Contour based Template Matching (CTM) and Tracking-Learning-Detection (TLD) provides core support for hand-gesture interaction by accurately detecting and tracking the hand gesture.
  •  
34.
  • Lu, Zhihan, et al. (authors)
  • WebVRGIS : a P2P network engine for VR data and GIS analysis
  • 2013
  • In: Lecture Notes in Computer Science. Springer Berlin Heidelberg. ISBN 9783642420535; pp. 503-510
  • Conference paper (peer-reviewed). Abstract:
    • A peer-to-peer (P2P) network engine for geographic VR data and GIS analysis on a 3D globe is proposed, which synthesizes several recent information technologies including web virtual reality (VR), 3D geographic information systems (GIS), 3D visualization and P2P networking. The engine is used to organize and present massive spatial data such as remote sensing data, and to share and publish it online over hash-based P2P. The P2P network maps users in real geographic space to user avatars in the virtual scene, as well as to nodes in the virtual network. The engine also supports integrated VRGIS functions, including 3D spatial analysis and 3D visualization of spatial processes, and serves as a web engine for the 3D globe and digital city.
  •  
35.
  • Lu, Zhihan, et al. (authors)
  • WebVRGIS : WebGIS based interactive online 3D virtual community
  • 2013
  • In: 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013). Institute of Electrical and Electronics Engineers (IEEE); pp. 94-99
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we present a WebVRGIS-based interactive online 3D virtual community built on WebGIS technology and web VR technology. It is a multi-dimensional (MD) WebGIS-based 3D interactive online virtual community: a virtual real-time 3D communication system and web development platform capable of running in a variety of browsers. In this work, four key issues are studied: (1) multi-source MD geographical data fusion in the WebGIS, (2) scene combination with 3D avatars, (3) massive data network dispatch, and (4) multi-user avatar real-time interaction. Our system is divided into three modules: data preprocessing, background management and front-end user interaction. The core of the front-end interaction module is packaged in the MD map expression engine 3GWebMapper and the free plug-in network 3D rendering engine WebFlashVR. We have evaluated the robustness of our system on three campuses of Ocean University of China (OUC) as a testing base. The results show the high efficiency, ease of use and robustness of our system.
  •  
36.
  • Lv, Z., et al. (authors)
  • An anaglyph 2D-3D stereoscopic video visualization approach
  • 2020
  • In: Multimedia Tools and Applications. Springer. ISSN 1380-7501, 1573-7721; 79:1-2, pp. 825-838
  • Journal article (peer-reviewed). Abstract:
    • In this paper, we propose a simple anaglyph 3D stereo generation algorithm from a 2D video sequence with a monocular camera. In our novel approach, we employ a camera pose estimation method to directly generate stereoscopic 3D from 2D video without building a depth map explicitly. Our cost-effective method is suitable for arbitrary real-world video sequences and produces smooth results. We use image stitching based on plane correspondence using the fundamental matrix, and we demonstrate that correspondence-plane image stitching based on the homography matrix alone cannot generate a better result. Furthermore, we utilize the structure-from-motion (with fundamental matrix) based reconstructed camera pose model to accomplish the visual anaglyph 3D illusion. The anaglyph result is visualized by a contour based yellow-blue 3D color code. The proposed approach demonstrates very good performance for most of the video sequences in the user study.
  •  
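The final anaglyph composition step can be sketched as below, assuming the two views have already been synthesized by the camera-pose and stitching stages described in the abstract. Interpreting the "yellow-blue 3D color code" as amber-blue channel mixing (red and green from the left view, blue from the right) is my assumption, and the function name is mine:

```python
# Compose a yellow-blue anaglyph from two synthesized views.
# Images are 2D grids of (r, g, b) tuples of equal size: the amber eye
# sees red+green from the left view, the blue eye sees blue from the right.

def yellow_blue_anaglyph(left, right):
    h, w = len(left), len(left[0])
    return [[(left[y][x][0], left[y][x][1], right[y][x][2])
             for x in range(w)] for y in range(h)]
```

A red-cyan code would swap the roles (red from the left view, green and blue from the right); only the channel selection changes.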
37.
  • Lv, Z., et al. (authors)
  • Finger in air : Touch-less interaction on smartphone
  • 2013
  • In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, MUM 2013. New York, NY, USA: Association for Computing Machinery (ACM). ISBN 9781450326483
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we present a vision based intuitive interaction method for smart mobile devices. It is based on markerless finger gesture detection and attempts to provide a 'natural user interface'. No additional hardware is necessary for real-time finger gesture estimation. To evaluate the strengths and effectiveness of the proposed method, we designed two smart phone applications: a circle menu application, which provides the user with graphics and the smart phone's status information, and a bouncing ball game, a finger gesture based bouncing ball application. Users interact with these applications using finger gestures through the smart phone's camera view, which triggers the interaction event and generates activity sequences for interactive buffers. Our preliminary user study demonstrates the effectiveness and social acceptability of the proposed interaction approach.
  •  
38.
  • Lv, Z., et al. (authors)
  • Foot motion sensing : Augmented game interface based on foot interaction for smartphone
  • 2014
  • In: Conference on Human Factors in Computing Systems - Proceedings. New York, NY, USA: ACM. ISBN 9781450324748; pp. 293-296
  • Conference paper (peer-reviewed). Abstract:
    • We designed and developed two games, a real-time augmented football game and an augmented foot piano game, to demonstrate an innovative interface based on foot motion sensing for smart phones. In the proposed interface, a computer vision based hybrid detection and tracking method provides core support for foot interaction by accurately tracking the shoes. Based on the proposed interface, two demonstrations were developed; the applications employ augmented reality technology to render the game graphics and game status information on the smart phone's screen. The players interact with the game using foot motion toward the rear camera, which triggers the interaction event. The interface supports basic foot motion sensing (i.e. direction of movement, velocity, rhythm).
  •  
39.
  • Lv, Zhihan, et al. (authors)
  • Touch-less interactive augmented reality game on vision-based wearable device
  • 2015
  • In: Personal and Ubiquitous Computing. Springer Science and Business Media LLC. ISSN 1617-4909, 1617-4917; 19:3-4, pp. 551-567
  • Journal article (peer-reviewed). Abstract:
    • There is an increasing interest in creating pervasive games based on emerging interaction technologies. In order to develop touch-less, interactive and augmented reality games on a vision-based wearable device, a touch-less motion interaction technology is designed and evaluated in this work. Users interact with the augmented reality games through dynamic hand/foot gestures in front of the camera, which trigger interaction events with the virtual objects in the scene. Three primitive augmented reality games with eleven dynamic gestures were developed on the proposed touch-less interaction technology as a proof of concept. Finally, a comparative evaluation demonstrates the social acceptability and usability of the touch-less approach, running on a hybrid wearable framework or on Google Glass, together with workload assessment and measures of users' emotions and satisfaction.
  •  
40.
  • Meurisch, Christian, et al. (authors)
  • SmartGuidance'17 : 2nd Workshop on Intelligent Personal Support of Human Behavior
  • 2017
  • Conference paper (peer-reviewed). Abstract:
    • In today's fast-paced environment, humans are faced with various problems such as information overload, stress, and health and social issues. So-called anticipatory systems promise to approach these issues through personal guidance or support within a user's daily and professional life. The Second Workshop on Intelligent Personal Support of Human Behavior (SmartGuidance'17) aims to build on the success of the previous workshop (Smarticipation), organized in conjunction with UbiComp 2016, and to continue discussing the latest research outcomes of anticipatory mobile systems. We invite the submission of papers within this emerging, interdisciplinary research field of anticipatory mobile computing that focus on the understanding, design, and development of such ubiquitous systems. We also welcome contributions that investigate human behaviors and the underlying recognition and prediction models, conduct field studies, or propose novel HCI techniques to provide personal support. All workshop contributions will be published in the supplemental proceedings of the UbiComp 2017 conference and included in the ACM Digital Library.
  •  
41.
  • Ortiz Morales, Daniel, 1984-, et al. (authors)
  • Generating Periodic Motions for the Butterfly Robot
  • 2013
  • In: Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE. ISSN 2153-0858, ISBN 9781467363587; pp. 2527-2532
  • Conference paper (peer-reviewed). Abstract:
    • We analyze the problem of dynamic non-prehensile manipulation by considering the example of the butterfly robot. Our main objective is to study the problem of stabilizing periodic motions, which resemble a form of juggling acrobatics. To this end, we approach the problem within the framework of virtual holonomic constraints. On this basis, we provide an analytical and systematic solution to the problems of trajectory planning and design of feedback controllers that guarantee orbital exponential stability. Results are presented in the form of simulation tests.
  •  
42.
  • Pizzamiglio, Sara, et al. (authors)
  • A multimodal approach to measure the distraction levels of pedestrians using mobile sensing
  • 2017
  • In: Procedia Computer Science. Elsevier. ISSN 1877-0509; 113, pp. 89-96
  • Journal article (peer-reviewed). Abstract:
    • The emergence of smart phones has had a positive impact on society, as their range of features and automation has allowed people to become more productive while on the move. On the other hand, the use of these devices has also become a distraction and hindrance, especially for pedestrians who use their phones while walking on the street. This is reinforced by the fact that pedestrian injuries due to mobile phone use now exceed mobile phone related driver injuries. This paper describes an approach that measures the different levels of distraction encountered by pedestrians while they walk. To distinguish between distractions within the brain, the proposed work analyses data collected from mobile sensors (accelerometers for movement, mobile EEG for electroencephalogram signals from the brain). The long-term motivation of this work is to provide pedestrians with notifications as they approach potential hazards while walking on the street and conducting multiple tasks, such as using a smart phone.
  •  
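As an illustration of the accelerometer side of such a pipeline (the EEG analysis is not shown, and this feature, the variance of acceleration magnitude over a window, is a common gait-irregularity proxy rather than the authors' published feature):

```python
import math

def magnitude_variance(samples):
    """Variance of the acceleration magnitude over one window.

    samples: list of (ax, ay, az) tuples from a sliding window.
    A steady gait yields a low variance; irregular, distracted walking
    tends to raise it, so the value can feed a distraction classifier.
    """
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    mean = sum(mags) / len(mags)
    return sum((m - mean) ** 2 for m in mags) / len(mags)
```

In a full system this feature would be computed per window (e.g. a few seconds of samples) and fused with the EEG-derived features.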
43.
  • Quan, Zhou, et al. (authors)
  • Face Recognition Using Dense SIFT Feature Alignment
  • 2016
  • In: Chinese Journal of Electronics. Institution of Engineering and Technology (IET). ISSN 1022-4653, 2075-5597; 25:6, pp. 1034-1039
  • Journal article (peer-reviewed). Abstract:
    • This paper addresses the face recognition problem in a challenging scenario where both training and test samples are subject to visual variations in pose, expression and misalignment. We employ dense Scale-Invariant Feature Transform (SIFT) feature matching as a generic transformation to roughly align the training samples, and then identify input facial images via an improved sparse representation model based on the aligned training samples. Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our method for the task of face recognition compared with previous methods.
  •  
44.
  • Shafiq, Rehan, et al. (authors)
  • Optical Transmission Plasmonic Color Filter with Wider Color Gamut Based on X-Shaped Nanostructure
  • 2022
  • In: Photonics. MDPI. ISSN 2304-6732; 9:4
  • Journal article (peer-reviewed). Abstract:
    • Extraordinary Optical Transmission Plasmonic Color Filters (EOT-PCFs) with nanostructures have the advantages of consistent color, small size, and excellent color reproduction, making them suitable replacements for colorant-based filters. Currently, the color gamut created by plasmonic filters is limited to the standard red, green, blue (sRGB) color space, which limits their future use. To address this limitation, we propose a surface plasmon resonance (SPR) color filter scheme that may provide a wide RGB color gamut exceeding the sRGB color space. A unique nanopattern structure is etched on the surface of an aluminum film. The nanohole functions as a coupling grating that matches photon momentum to the plasma when exposed to natural light, and surface plasmon resonances arise at the metal surface as light passes through the film. The plasmon resonance wavelength can be tuned by modifying the structural parameters of the nanopattern to obtain varied transmission spectra. Via the International Commission on Illumination (CIE 1931) chromaticity diagram, a transmission spectrum can be converted into color coordinates and hence into various colors. The resulting color range and saturation can outperform those of existing color filters.
  •  
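The spectrum-to-chromaticity conversion mentioned above can be sketched as follows. This is a hedged illustration: the color-matching values below are coarse placeholder samples at three wavelengths, not the real CIE 1931 tables, and the names are mine.

```python
# Convert a filter's transmission spectrum to CIE 1931 chromaticity (x, y):
# integrate the spectrum against the color-matching functions to get XYZ,
# then normalize. CMF holds ILLUSTRATIVE placeholder samples only.

CMF = {  # wavelength (nm): (xbar, ybar, zbar), placeholder values
    450: (0.33, 0.04, 1.77),
    550: (0.43, 0.99, 0.0087),
    650: (0.28, 0.107, 0.0),
}

def chromaticity(transmission):
    """transmission: {wavelength_nm: T}; returns chromaticity (x, y)."""
    X = sum(T * CMF[wl][0] for wl, T in transmission.items())
    Y = sum(T * CMF[wl][1] for wl, T in transmission.items())
    Z = sum(T * CMF[wl][2] for wl, T in transmission.items())
    s = X + Y + Z
    return X / s, Y / s
```

With the real 1 nm (or 5 nm) CIE tables the sums become proper Riemann sums over the visible range, weighted by the illuminant; the structure of the computation is unchanged.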
45.
  • ur Réhman, Shafiq, 1978-, et al. (authors)
  • Using Vibrotactile Language for Multimodal Human Animals Communication and Interaction
  • 2014
  • In: Proceedings of the 2014 Workshops on Advances in Computer Entertainment Conference, ACE '14. New York, NY, USA: Association for Computing Machinery (ACM). ISBN 9781450333146; pp. 1:1-1:5
  • Conference paper (peer-reviewed). Abstract:
    • In this work we aim to facilitate computer-mediated multimodal communication and interaction between humans and animals based on vibrotactile stimuli. To study and influence the behavior of animals, researchers usually use 2D/3D visual stimuli; we instead use a vibrotactile pattern based language, which provides the opportunity to communicate and interact with animals. We performed experiments with a vibrotactile human-animal multimodal communication system to study the effectiveness of vibratory stimuli applied to the animal's skin along with audio and visual stimuli. The preliminary results are encouraging and indicate that low-resolution tactual displays are effective in transmitting information.
  •  
46.
  • ur Réhman, Shafiq, 1978- (author)
  • Expressing emotions through vibration for perception and control
  • 2010
  • Doctoral thesis (other academic/artistic). Abstract:
    • This thesis addresses a challenging problem: “how to let the visually impaired ‘see’ others’ emotions”. We human beings are heavily dependent on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved, etc. People use emotional information from facial expressions to switch between conversation topics and to determine the attitudes of individuals. Missing the emotional information carried by facial expressions and head gestures makes it extremely difficult for the visually impaired to interact with others in social events. To enhance the social interaction abilities of the visually impaired, this thesis works on the scientific topic of ‘expressing human emotions through vibrotactile patterns’. Delivering human emotions through touch is quite challenging since our touch channel is very limited. We first investigated how to render emotions through a vibrator. We developed a real-time “lipless” tracking system to extract dynamic emotions from the mouth and employed mobile phones as a platform for the visually impaired to perceive primary emotion types. Later on, we extended the system to render more general dynamic media signals, for example rendering live football games through vibration on the mobile to improve mobile users' communication and entertainment experience. To display more natural emotions (i.e. emotion type plus emotion intensity), we developed technology that enables the visually impaired to directly interpret human emotions. This was achieved by use of machine vision techniques and a vibrotactile display comprised of a ‘vibration actuator matrix’ mounted on the back of a chair, whose actuators are sequentially activated to provide dynamic emotional information. The research focus has been on finding a global, analytical and semantic representation for facial expressions to replace the state-of-the-art facial action coding system (FACS) approach.
We proposed to use the manifold of facial expressions to characterize dynamic emotions. The basic emotional expressions with increasing intensity become curves on the manifold extending from the center. The blends of emotions lie between those curves and can be defined analytically by the positions of the main curves. The manifold is the “Braille code” of emotions. The developed methodology and technology have been extended to build assistive wheelchair systems that aid a specific group of disabled people, cerebral palsy or stroke patients (i.e. those lacking fine motor control skills), who do not have the ability to access and control a wheelchair by conventional means such as a joystick or chin stick. The solution is to extract the manifold of head or tongue gestures for controlling the wheelchair. The manifold is rendered by a 2D vibration array to provide the wheelchair user with action information from gestures and with system status information, which is very important for enhancing the usability of such an assistive system. The current research work not only provides a foundation stone for vibrotactile rendering systems based on object localization but also a concrete step toward a new dimension of human-machine interaction.
  •  
47.
  • ur Réhman, Shafiq, 1978-, et al. (authors)
  • Facial expression appearance for tactile perception of emotions
  • 2007
  • In: Proceedings of the Swedish Symposium on Image Analysis, 2007; pp. 157-160
  • Conference paper (peer-reviewed). Abstract:
    • To enhance the daily life experience of visually challenged persons, the Facial Expression Appearance for Tactile system is proposed. The manifold of facial expressions is used for tactile perception. The Locally Linear Embedding (LLE) coding algorithm is implemented for the tactile display and extended to handle real-time video coding. The vibrotactile chair, a social interface for the blind, is used to display the facial expression. The chair provides the visually impaired with on-line emotion information about the person he/she is approaching. The preliminary results are encouraging and show that it greatly enhances the communication ability of the visually impaired person.
  •  
48.
  • ur Réhman, Shafiq, 1978-, et al. (authors)
  • How to use manual labelers in evaluation of lip analysis systems?
  • 2009
  • In: Visual Speech Recognition. USA: IGI Global. ISBN 9781605661865; pp. 239-259
  • Book chapter (other academic/artistic). Abstract:
    • The purpose of this chapter is not to describe any lip analysis algorithms but rather to discuss some of the issues involved in evaluating and calibrating labeled lip features from human operators. In the chapter we question a common practice in the field: using manual lip labels directly as the ground truth for the evaluation of lip analysis algorithms. Our empirical results using an Expectation-Maximization procedure show that subjective noise in manual labeling can be quite significant when quantifying both human and algorithm extraction performance. To train and evaluate a lip analysis system, one can measure the performance of the human operators and simultaneously infer the “ground truth” from the manual labels.
  •  
49.
  • ur Réhman, Shafiq, 1978-, et al. (authors)
  • iFeeling : Vibrotactile rendering of human emotions on mobile phones
  • 2010, 1st edition
  • In: Mobile Multimedia Processing. Heidelberg, Germany: Springer. ISBN 9783642123481; pp. 1-20
  • Book chapter (other academic/artistic). Abstract:
    • Today, mobile phone technology is mature enough to enable us to effectively interact with mobile phones using our three major senses, namely vision, hearing and touch. Like the camera, which adds interest and utility to the mobile experience, the vibration motor in a mobile phone gives us a new possibility to improve the interactivity and usability of mobile phones. In this chapter, we show that by carefully controlling vibration patterns, more than 1 bit of information can be rendered with a vibration motor. We demonstrate how to turn a mobile phone into a social interface for the blind so that they can sense the emotional information of others. Technical details are given on how to extract emotional information, design vibrotactile coding schemes and render vibrotactile patterns, as well as how to carry out user tests to evaluate usability. Experimental studies and user tests have shown that users do get and interpret more than one bit of emotional information. This shows a potential to enrich mobile phone communication among users through the touch channel.
  •  
50.
  •  