SwePub
Search the SwePub database


Result list for the search "WFRF:(Khan Muhammad Sikandar Lal 1988 ) "


  • Result 1-10 of 17
1.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • A pilot user's prospective in mobile robotic telepresence system
  • 2014
  • In: 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA 2014. - : IEEE conference proceedings. - 9786163618238
  • Conference paper (peer-reviewed)abstract
    • In this work we present an interactive video conferencing system specifically designed to enhance the experience of video teleconferencing for a pilot user. We use an Embodied Telepresence System (ETS), previously designed to enhance the video teleconferencing experience for the collaborators, and deploy it in a novel scenario to improve the experience of the pilot user during distance communication. The ETS is used to adjust the pilot user's view at the distant location (e.g., a remotely located conference or meeting). A velocity-profile control for the ETS was developed that is implicitly driven by the pilot user's head movements. An experiment was conducted to test whether the view-adjustment capability of the ETS increases the pilot user's collaboration experience in video conferencing. In the user study, participants (pilot users) interacted both through the ETS and through a traditional computer-based video conferencing tool. Overall, the results suggest the effectiveness of our approach in enhancing the video conferencing experience for the pilot user.
  •  
2.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Action Augmented Real Virtuality Design for Presence
  • 2018
  • In: IEEE Transactions on Cognitive and Developmental Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2379-8920 .- 2379-8939. ; 10:4, s. 961-972
  • Journal article (peer-reviewed)abstract
    • This paper addresses the important question of how to design a video teleconferencing setup that increases the experience of spatial and social presence. Traditional video teleconferencing setups lack the means to present the nonverbal behaviors that humans express in face-to-face communication, which results in a decreased experience of presence. To address this issue, we first present a conceptual framework of presence for video teleconferencing. We introduce a modern presence concept called real virtuality and propose a new way of achieving it based on body or artifact actions to increase the feeling of presence, a concept we name presence through actions. Using this concept, we present the design of a novel action-augmented real virtuality prototype that considers the challenges related to the design of an action prototype, action embodiment, and face representation. Our action prototype is a telepresence mechatronic robot (TEBoT), and action embodiment is achieved through a head-mounted display (HMD). The face representation solves the problem of face occlusion introduced by the HMD. The novel combination of HMD, TEBoT, and face representation algorithm has been tested in a real video teleconferencing scenario for its ability to solve the challenges related to spatial and social presence. We performed a user study in which the invited participants were asked to experience our novel setup and to compare it with a traditional video teleconferencing setup. The results show that the action capabilities not only increase the feeling of spatial presence but also increase the feeling of social presence of a remote person among local collaborators.
  •  
3.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Distance Communication : Trends and Challenges and How to Resolve them
  • 2014
  • In: Strategies for a creative future with computer science, quality design and communicability. - Italy : Blue Herons Editions. - 9788896471104
  • Book chapter (peer-reviewed)abstract
    • Distance communication is becoming an important part of our lives because of the current advancement in computer-mediated communication (CMC). Despite this advancement, especially in video teleconferencing, CMC is still far from face-to-face (FtF) interaction. This study focuses on the advancements in video teleconferencing, and on their trends and challenges. Furthermore, this work presents an overview of previously developed hardware and software techniques to improve the video teleconferencing experience. After discussing the background development of video teleconferencing, we propose an intuitive solution to improve the video teleconferencing experience. To support the proposed solution, an Embodied Interaction based distance-communication framework is developed, and its effectiveness is validated by user studies. To summarize, this work has considered the following questions: What factors make video teleconferencing different from face-to-face interaction? What have researchers done so far to improve video teleconferencing? How can the teleconferencing experience be further improved? How can more nonverbal modalities be added to enhance the video teleconferencing experience? Finally, we provide future directions for embodied-interaction-based video teleconferencing.
  •  
4.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Embodied tele-presence system (ETS) : Designing tele-presence for video teleconferencing
  • 2014
  • In: 3rd International Conference on Design, User Experience, and Usability: User Experience Design for Diverse Interaction Platforms and Environments, DUXU 2014, Held as Part of 16th International Conference on Human-Computer Interaction, HCI Int. 2014. - Cham : Springer International Publishing. - 9783319076256 - 9783319076263 ; , s. 574-585
  • Conference paper (peer-reviewed)abstract
    • In spite of the progress made in teleconferencing over the last decades, it is still far from a resolved issue. In this work, we present an intuitive video teleconferencing system, the Embodied Tele-Presence System (ETS), which is based on the concept of embodied interaction. This work presents the results of a user study testing the hypothesis: "An embodied-interaction-based video conferencing system performs better than a standard video conferencing system in representing nonverbal behaviors, thus creating a 'feeling of presence' of a remote person among his/her local collaborators". Our ETS integrates standard audio-video conferencing with mechanical embodiment of a remote person's head gestures (as nonverbal behavior) to enhance the level of interaction. To highlight the technical challenges and design principles behind such tele-presence systems, we have also performed a system evaluation, which shows the accuracy and efficiency of our ETS design. The paper further provides an overview of our case study and an analysis of our user evaluation. The user study shows that the proposed embodied interaction approach to video teleconferencing increases 'in-meeting interaction' and enhances a 'feeling of presence' between the remote participant and his collaborators.
  •  
5.
  •  
6.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Expressive multimedia : Bringing action to physical world by dancing-tablet
  • 2015
  • In: HCMC 2015 - Proceedings of the 2nd Workshop on Computational Models of Social Interactions. - New York, NY, USA : ACM Digital Library. - 9781450337472 ; , s. 9-14
  • Conference paper (peer-reviewed)abstract
    • The design practice based on the embodied interaction concept focuses on developing new user interfaces for computer devices that merge digital content with the physical world. In this work we propose a novel embodied-interaction-based design in which the 'action' information of digital content is presented in the physical world. More specifically, we map the 'action' information of video content from the digital world into the physical world. The motivating example presented in this paper is our novel dancing-tablet, in which a tablet PC dances to the rhythm of a song; hence the 'action' information is not just confined to a 2D flat display but also expressed by it. This paper presents (i) the hardware design of our mechatronic dancing-tablet platform, (ii) a software algorithm for musical feature extraction, and (iii) an embodied computational model for mapping the 'action' information of the musical expression onto the mechatronic platform. Our user study shows that the overall perception of audio-video music is enhanced by our dancing-tablet setup.
  •  
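The paper's actual feature-extraction algorithm and computational model are not reproduced in the abstract; as a toy illustration of the general pipeline it describes (extract a musical feature, map it to physical motion), the sketch below computes a per-frame RMS energy envelope and scales it to servo angles. The frame size, the RMS feature, and the 30-degree swing are all illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def energy_envelope(signal, frame=1024):
    """RMS energy per non-overlapping frame of an audio signal."""
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def to_servo_angles(env, max_deg=30.0):
    """Map the energy envelope to servo angles: louder frame, bigger sway."""
    peak = env.max()
    if peak == 0:
        return np.zeros_like(env)
    return max_deg * env / peak

# Toy input: 1 s of a 440 Hz tone with a slow 2 Hz amplitude modulation.
sr = 8000
t = np.linspace(0, 1, sr, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t))
angles = to_servo_angles(energy_envelope(signal))
print(angles.shape, round(float(angles.max()), 1))  # (7,) 30.0
```

A real system would use beat/onset features rather than raw energy, but the envelope-to-angle mapping shows the shape of the digital-to-physical translation the abstract describes.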
7.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Face-off : A face reconstruction technique for virtual reality (VR) scenarios
  • 2016
  • In: 14th European Conference on Computer Vision, ECCV 2016. - Cham : Springer. - 9783319466033 - 9783319466040 ; , s. 490-503
  • Conference paper (peer-reviewed)abstract
    • Virtual Reality (VR) headsets occlude a significant portion of the human face, yet the real human face is required in many VR applications, for example video teleconferencing. This paper proposes a wearable-camera-based solution to reconstruct the real face of a person wearing a VR headset. Our solution has at its core asymmetrical principal component analysis (aPCA). A user-specific training model is built using aPCA with full-face, lip, and eye-region information. During the testing phase, the lower face region and partial eye information are used to reconstruct the wearer's face. The online testing session consists of two phases: (i) a calibration phase and (ii) a reconstruction phase. In the former, a small calibration step aligns the test information with the training data, while the latter uses half-face information to reconstruct the full face using the aPCA-based trained data. The proposed approach is validated with qualitative and quantitative analysis.
  •  
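The paper's exact aPCA formulation is not given in the abstract. As a minimal sketch of the general idea (reconstructing a full face vector from a partially visible region using a PCA basis), the code below fits component coefficients to the visible pixels by least squares and re-synthesizes the whole vector. All names, dimensions, and the random "faces" are illustrative assumptions, not the paper's method or data.

```python
import numpy as np

def train_pca(faces, k):
    """faces: (n_samples, n_pixels) matrix of full-face training vectors."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Right singular vectors of the centered data are the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]                         # (n_pixels,), (k, n_pixels)

def reconstruct(partial, visible_idx, mean, components):
    """Estimate the full face from the pixels at visible_idx: solve a least-
    squares problem for the component coefficients, then re-synthesize."""
    a = components[:, visible_idx].T            # (n_visible, k)
    b = partial - mean[visible_idx]             # (n_visible,)
    coeffs, *_ = np.linalg.lstsq(a, b, rcond=None)
    return mean + coeffs @ components           # full-face estimate

# Toy usage: 50 random "faces" of 100 pixels; only the lower half is visible.
rng = np.random.default_rng(0)
faces = rng.normal(size=(50, 100))
mean, comps = train_pca(faces, k=10)
visible = np.arange(50, 100)                    # "lower face" pixel indices
full_est = reconstruct(faces[0][visible], visible, mean, comps)
print(full_est.shape)                           # (100,)
```

In a real setup the training vectors would be registered face images and the visible indices would correspond to the unoccluded lower face and the partial eye regions captured by the wearable cameras.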
8.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Gaze perception and awareness in smart devices
  • 2016
  • In: International journal of human-computer studies. - : Elsevier. - 1071-5819 .- 1095-9300. ; 92-93, s. 55-65
  • Journal article (peer-reviewed)abstract
    • Eye contact and gaze awareness play a significant role in conveying emotions and intentions during face-to-face conversation. Humans can perceive each other's gaze quite naturally and accurately. However, gaze awareness/perception is ambiguous during video teleconferencing performed by computer-based devices (such as laptops, tablets, and smart-phones). The reasons for this ambiguity are (i) the camera position relative to the screen and (ii) the 2D rendition of a 3D human face, i.e., the 2D screen is unable to deliver an accurate gaze during video teleconferencing. To solve this problem, researchers have proposed different hardware setups with complex software algorithms. The most recent solutions for accurate gaze perception employ 3D interfaces, such as 3D screens and 3D face-masks. However, today's commonly used video teleconferencing devices are smart devices with 2D screens; therefore, there is a need to improve gaze awareness/perception on these smart devices. In this work, we revisit the question of how to improve a remote user's gaze awareness among his/her collaborators. Our hypothesis is that 'an accurate gaze perception can be achieved by the 3D embodiment of a remote user's head gestures during video teleconferencing'. We have prototyped an embodied telepresence system (ETS) for the 3D embodiment of a remote user's head. Our ETS is based on a 3-DOF neck robot with a mounted smart device (tablet PC). The electromechanical platform in combination with a smart device is a novel setup used for studying gaze awareness/perception on 2D screen-based smart devices during video teleconferencing. Two important gaze-related issues are considered in this work: (i) the 'Mona Lisa Gaze Effect' – the gaze is always directed at the person independent of his position in the room, and (ii) 'Gaze Awareness/Faithfulness' – the ability to perceive an accurate spatial relationship between the observing person and the object attended to by an actor. Our results confirm that the 3D embodiment of a remote user's head not only mitigates the Mona Lisa gaze effect but also supports three levels of gaze faithfulness, hence accurately projecting the human gaze in distant space.
  •  
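The abstract does not specify how the remote user's head motion drives the 3-DOF neck robot. As a hypothetical sketch of the simplest such mapping, the code below clamps a remote head orientation (yaw, pitch, roll in degrees) to assumed joint limits of the mechanism; the limits and the 1:1 mapping are illustrative assumptions, not the paper's parameters.

```python
# Assumed mechanical range of each neck joint, in degrees.
JOINT_LIMITS = {"yaw": (-90, 90), "pitch": (-45, 45), "roll": (-30, 30)}

def head_to_neck(yaw, pitch, roll):
    """Map a remote head orientation to neck-joint commands by clamping
    each axis to what the mechanism can physically reach."""
    pose = {"yaw": yaw, "pitch": pitch, "roll": roll}
    return {axis: max(lo, min(hi, pose[axis]))
            for axis, (lo, hi) in JOINT_LIMITS.items()}

# A head turn beyond the joint range saturates at the limit.
print(head_to_neck(120, -10, 5))   # {'yaw': 90, 'pitch': -10, 'roll': 5}
```

A real controller would additionally smooth the command stream (e.g., with a velocity profile, as entry 1 describes) rather than forwarding raw poses.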
9.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Head Orientation Modeling : Geometric Head Pose Estimation using Monocular Camera
  • 2013
  • In: Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013. - : The Institute of Industrial Applications Engineers. ; , s. 149-153
  • Conference paper (other academic/artistic)abstract
    • In this paper we propose a simple and novel method for head pose estimation using 3D geometric modeling. Our algorithm initially employs Haar-like features to detect the face and facial-feature areas (more precisely, the eyes). For robust tracking of these regions it also uses the Tracking-Learning-Detection (TLD) framework on a given video sequence. Based on the two eye areas, we model a pivot point using a distance measure devised from anthropometric statistics and the MPEG-4 coding scheme. This simple geometrical approach relies on the structure of human facial features on the camera-view plane to estimate the yaw, pitch, and roll of the human head. The accuracy and effectiveness of our proposed method are reported on live video sequences against a head-mounted inertial measurement unit (IMU).
  •  
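The abstract only outlines the geometric idea. As an illustrative sketch (not the paper's actual measure, which draws on anthropometric statistics and MPEG-4 facial parameters), the code below estimates in-plane roll from the line between the two eye centers and a rough yaw magnitude from the foreshortening of the inter-ocular distance; the baseline constant is a made-up placeholder.

```python
import numpy as np

BASELINE_PX = 60.0   # assumed frontal inter-ocular distance in pixels

def head_roll_yaw(left_eye, right_eye):
    """Roll from the tilt of the eye line; yaw magnitude from how much the
    projected eye distance has shrunk relative to the frontal baseline.
    The sign of yaw is ambiguous without further facial landmarks."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    roll = np.degrees(np.arctan2(dy, dx))
    dist = np.hypot(dx, dy)
    ratio = np.clip(dist / BASELINE_PX, -1.0, 1.0)
    yaw = np.degrees(np.arccos(ratio))
    return roll, yaw

# Frontal face: eyes level and at the baseline separation.
roll, yaw = head_roll_yaw(left_eye=(100.0, 120.0), right_eye=(160.0, 120.0))
print(round(float(roll), 1), round(float(yaw), 1))  # 0.0 0.0
```

Pitch would need a third vertical landmark (e.g., the nose tip or mouth) relative to the eye line, which is where the pivot-point model in the paper comes in.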
10.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Moveable facial features in a Social Mediator
  • 2017
  • In: Intelligent Virtual Agents. - Cham : Springer London. - 9783319674001 - 9783319674018 ; , s. 205-208
  • Conference paper (peer-reviewed)abstract
    • A brief display of facial-feature-based behavior has a major impact on personality perception in human-human communication. Creating such personality traits and representations in a social robot is a challenging task. In this paper, we propose an approach for robotic face presentation based on moveable 2D facial features and present a comparative study in which a synthesized face is projected using three setups: 1) a 3D mask, 2) a 2D screen, and 3) our 2D moveable-facial-feature-based visualization. We found that the robot's personality and character are highly influenced by the projected face quality as well as by the motion of the facial features.
  •  
