SwePub
Search the SwePub database

  Advanced search

Result list for the search "hsv:(NATURVETENSKAP) hsv:(Data och informationsvetenskap) hsv:(Datorseende och robotik) srt2:(2010-2014)"

Search: hsv:(NATURVETENSKAP) hsv:(Data och informationsvetenskap) hsv:(Datorseende och robotik) > (2010-2014)

  • Results 1-50 of 757
Sort/group the result list

Numbering | Reference | Cover image | Find
1.
  • Yun, Yixiao, 1987, et al. (författare)
  • Maximum-Likelihood Object Tracking from Multi-View Video by Combining Homography and Epipolar Constraints
  • 2012
  • Ingår i: 6th ACM/IEEE Int'l Conf on Distributed Smart Cameras (ICDSC 12), Oct 30 - Nov.2, 2012, Hong Kong. - 9781450317726 ; , s. 6 pages-
  • Konferensbidrag (refereegranskat)abstract
    • This paper addresses problem of object tracking in occlusion scenarios, where multiple uncalibrated cameras with overlapping fields of view are used. We propose a novel method where tracking is first done independently for each view and then tracking results are mapped between each pair of views to improve the tracking in individual views, under the assumptions that objects are not occluded in all views and move uprightly on a planar ground which may induce a homography relation between each pair of views. The tracking results are mapped by jointly exploiting the geometric constraints of homography, epipolar and vertical vanishing point. Main contributions of this paper include: (a) formulate a reference model of multi-view object appearance using region covariance for each view; (b) define a likelihood measure based on geodesics on a Riemannian manifold that is consistent with the destination view by mapping both the estimated positions and appearances of tracked object from other views; (c) locate object in each individual view based on maximum likelihood criterion from multi-view estimations of object position. Experiments have been conducted on videos from multiple uncalibrated cameras, where targets experience long-term partial or full occlusions. Comparison with two existing methods and performance evaluations are also made. Test results have shown effectiveness of the proposed method in terms of robustness against tracking drifts caused by occlusions.
  •  
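The entry above builds its likelihood on region-covariance appearance models and geodesic distances on a Riemannian manifold. As a rough, self-contained illustration of those two building blocks (not the authors' implementation), the Python sketch below computes a covariance descriptor for an image patch and compares two descriptors with a log-Euclidean geodesic distance; the per-pixel feature set and the choice of the log-Euclidean metric are assumptions made for the example.

```python
import numpy as np

def region_covariance(patch_gray):
    """Covariance descriptor of a grayscale patch.

    Per-pixel feature vector: (x, y, intensity, |dI/dx|, |dI/dy|) -- a common
    choice for region-covariance trackers, assumed here for illustration."""
    h, w = patch_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = np.gradient(patch_gray.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(),
                      patch_gray.astype(float).ravel(),
                      np.abs(dx).ravel(), np.abs(dy).ravel()], axis=1)
    return np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])

def spd_log(S):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(np.maximum(w, 1e-12))) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean geodesic distance between two SPD matrices."""
    return np.linalg.norm(spd_log(A) - spd_log(B), ord='fro')

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((32, 32))                       # reference appearance patch
    cand = ref + 0.05 * rng.standard_normal((32, 32))  # slightly perturbed candidate
    d = log_euclidean_distance(region_covariance(ref), region_covariance(cand))
    print(f"geodesic-style distance: {d:.4f}")
```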
2.
  • Dobnik, Simon, 1977 (författare)
  • Coordinating spatial perspective in discourse
  • 2012
  • Ingår i: Proceedings of the Workshop on Vision and Language 2012 (VL'12): The 2nd Annual Meeting of the EPSRC Network on Vision and Language.
  • Konferensbidrag (övrigt vetenskapligt/konstnärligt)abstract
    • We present results of an on-line data collection experiment where we investigate the assignment and coordination of spatial perspective between a pair of dialogue participants situated in a constrained virtual environment.
  •  
3.
  • Gu, Irene Yu-Hua, 1953, et al. (författare)
  • Grassmann Manifold Online Learning and Partial Occlusion Handling for Visual Object Tracking under Bayesian Formulation
  • 2012
  • Ingår i: Proceedings - International Conference on Pattern Recognition. - 1051-4651. - 9784990644109 ; , s. 1463-1466
  • Konferensbidrag (refereegranskat)abstract
    • This paper addresses issues of online learning and occlusion handling in video object tracking. Although manifold tracking is promising, large pose changes and long term partial occlusions of video objects remain challenging. We propose a novel manifold tracking scheme that tackles such problems, with the following main novelties: (a) Online estimation of object appearances on Grassmann manifolds; (b) Optimal criterion-based occlusion handling during online learning; (c) Nonlinear dynamic model for appearance basis matrix and its velocity; (d) Bayesian formulations separately for the tracking and the online learning process. Two particle filters are employed: one is on the manifold for generating appearance particles and another on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate the tracking drift. Experiments on videos have shown robust tracking performance especially when objects contain significant pose changes accompanied with long-term partial occlusions. Evaluations and comparisons with two existing methods provide further support to the proposed method.
  •  
4.
  • Höglund, Lars, 1946, et al. (författare)
  • Maskininlärningsbaserad indexering av digitaliserade museiartefakter - projektrapport
  • 2012
  • Rapport (övrigt vetenskapligt/konstnärligt)abstract
    • The project carried out experiments with machine-based analysis and machine learning for automatic indexing and analysis of images, as support for the registration of objects in museum collections. The results show that this is feasible for delimited subsets, with machine learning used as a support for, but not a replacement of, manual analysis. The project also identified the need to develop a user interface for both text and image search, and developed a prototype solution for this, which is documented in this report and in a separate appendix to the report. The material forms a basis for implementations that provide extended search capabilities, more efficient registration and a user-friendly interface. The work is at the forefront of the research field's results and established methods, combining statistical, linguistic and computer science methods. See the link to the report, and also the link to the appendix, further down.
  •  
5.
  • Bååth, Rasmus, et al. (författare)
  • A prototype based resonance model of rhythm categorization
  • 2014
  • Ingår i: i-Perception. - : SAGE Publications. - 2041-6695. ; 5:6, s. 548-558
  • Tidskriftsartikel (refereegranskat)abstract
    • Categorization of rhythmic patterns is prevalent in musical practice, an example of this being the transcription of (possibly not strictly metrical) music into musical notation. In this article we implement a dynamical systems’ model of rhythm categorization based on the resonance theory of rhythm perception developed by Large (2010). This model is used to simulate the categorical choices of participants in two experiments of Desain and Honing (2003). The model accurately replicates the experimental data. Our results support resonance theory as a viable model of rhythm perception and show that by viewing rhythm perception as a dynamical system it is possible to model central properties of rhythm categorization.
  •  
6.
  • Liu, Yang, et al. (författare)
  • Movement Status Based Vision Filter for RoboCup Small-Size League
  • 2012
  • Ingår i: Advances in Automation and Robotics, Vol. 2. - Berlin, Heidelberg : Springer. - 9783642256455 - 9783642256462 ; , s. 79-86
  • Bokkapitel (övrigt vetenskapligt/konstnärligt)abstract
    • Small-size soccer league is a division of the RoboCup (Robot world cup) competitions. Each team uses its own designed hardware and software to compete with others under defined rules. There are two kinds of data which the strategy system will receive from the dedicated server: one of them is the referee commands, and the other one is vision data. However, due to the network delay and the vision noise, we have to process the data before we can actually use it. Therefore, a certain mechanism is needed in this case. Instead of using some prevalent and complex algorithms, this paper proposes to solve this problem from a simple kinematics and mathematics point of view, which can be implemented effectively by hobbyists and undergraduate students. We divide this problem by the speed status and deal with it in three different situations. Testing results show good performance with this algorithm and great potential in filtering vision data and thus forecasting actual coordinates of tracked objects.
  •  
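The chapter summarised above filters noisy vision data by branching on the object's movement status and applying simple kinematics. The sketch below is a minimal, hypothetical version of that idea: classify an object as static, slow or fast from its estimated speed, then smooth or extrapolate accordingly. The thresholds and the exact three-way rules are illustrative assumptions, not the published filter.

```python
import numpy as np

def filter_position(history, dt, slow_thresh=0.05, fast_thresh=0.5):
    """Very simplified movement-status vision filter.

    history : array-like of shape (k, 2), recent noisy (x, y) observations
    dt      : sampling interval in seconds
    The thresholds (m/s) and the three-way split are illustrative assumptions."""
    history = np.asarray(history, dtype=float)
    velocity = (history[-1] - history[-2]) / dt
    speed = np.linalg.norm(velocity)
    if speed < slow_thresh:            # effectively static: suppress jitter
        return history.mean(axis=0)
    elif speed < fast_thresh:          # slow: light smoothing
        return 0.5 * history[-1] + 0.5 * history[:-1].mean(axis=0)
    else:                              # fast: extrapolate to compensate delay
        return history[-1] + velocity * dt

if __name__ == "__main__":
    obs = [(0.00, 0.00), (0.10, 0.02), (0.21, 0.01), (0.32, 0.03)]
    print(filter_position(obs, dt=1 / 60))
```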
7.
  • Svensson, Lennart (författare)
  • Image analysis and interactive visualization techniques for electron microscopy tomograms
  • 2014
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • Images are an important data source in modern science and engineering. A continued challenge is to perform measurements on and extract useful information from the image data, i.e., to perform image analysis. Additionally, the image analysis results need to be visualized for best comprehension and to enable correct assessments. In this thesis, research is presented about digital image analysis and three-dimensional (3-D) visualization techniques for use with transmission electron microscopy (TEM) image data and in particular electron tomography, which provides 3-D reconstructions of the nano-structures. The electron tomograms are difficult to interpret because of, e.g., low signal-to-noise ratio, artefacts that stem from sample preparation and insufficient reconstruction information. Analysis is often performed by visual inspection or by registration, i.e., fitting, of molecular models to the image data. Setting up a visualization can however be tedious, and there may be large intra- and inter-user variation in how visualization parameters are set. Therefore, one topic studied in this thesis concerns automatic setup of the transfer function used in direct volume rendering of these tomograms. Results indicate that histogram and gradient based measures are useful in producing automatic and coherent visualizations. Furthermore, research has been conducted concerning registration of templates built using molecular models. Explorative visualization techniques are presented that can provide means of visualizing and navigating model parameter spaces. This gives a new type of visualization feedback to the biologist interpretating the TEM data. The introduced probabilistic template has an improved coverage of the molecular flexibility, by incorporating several conformations into a static model. Evaluation by cross-validation shows that the probabilistic template gives a higher correlation response than using a Protein Databank (PDB) devised model. The software ProViz (for Protein Visualization) is also introduced, where selected developed techniques have been incorporated and are demonstrated in practice.
  •  
8.
  • Kjellin, Andreas, et al. (författare)
  • Evaluating 2D and 3D Visualizations of Spatiotemporal Information
  • 2010
  • Ingår i: ACM Transactions on Applied Perception. - : Association for Computing Machinery (ACM). - 1544-3558 .- 1544-3965. ; 7:3, s. 19:1-19:23
  • Tidskriftsartikel (refereegranskat)abstract
    • Time-varying geospatial data presents some specific challenges for visualization. Here, we report the results of three experiments aiming at evaluating the relative efficiency of three existing visualization techniques for a class of such data. The class chosen was that of object movement, especially the movements of vehicles in a fictitious landscape. Two different tasks were also chosen. One was to predict where three vehicles will meet in the future given a visualization of their past movement history. The second task was to estimate the order in which four vehicles arrived at a specific place. Our results reveal that previous findings had generalized human perception in these situations and that large differences in user efficiency exist for a given task between different types of visualizations depicting the same data. Furthermore, our results are in line with earlier general findings on the nature of human perception of both object shape and scene changes. Finally, the need for new taxonomies of data and tasks based on results from perception research is discussed.
  •  
9.
  • Porathe, Thomas, 1954, et al. (författare)
  • Maritime Unmanned Navigation through Intelligence in Networks: The MUNIN project
  • 2013
  • Ingår i: 12th International Conference on Computer and IT Applications in the Maritime Industries, COMPIT’13, Cortona, 15-17 April 2013. - 9783892206637 ; , s. 177-183
  • Konferensbidrag (refereegranskat)abstract
    • This paper introduces the MUNIN project attempting to put a 200 meter long bulk carrier under autonomous control. The paper gives a motivation and an overview of the project as well as present some of the key research questions dealing with the human intervention possibilities. As a fallback option the unmanned ship is monitored by a shore control center which has the ability to take direct control if necessary. A challenge for the unmanned ship is the interaction with other manned ships.
  •  
10.
  • Berger, Christian, 1980, et al. (författare)
  • Model-based, Composable Simulation for the Development of Autonomous Miniature Vehicles
  • 2013
  • Ingår i: Mod4Sim'13: 3rd International Workshop on Model-driven Approaches for Simulation Engineering at SCS/IEEE Symposium on Theory of Modeling and Simulation in conjunction with SpringSim 2013. - 0735-9276. - 9781627480321 ; 45, s. 118-125
  • Konferensbidrag (refereegranskat)abstract
    • Modern vehicles contain nearly 100 embedded control units to realize various comfort and safety functions. These vehicle functions consist of a sensor, a data processing, and an actor layer to react intelligently to stimuli from their context. Recently, these sensors do not only perceive data from the own vehicle but more often also data from the vehicle's surroundings to understand the current traffic situation. Thus, traditional development and testing processes need to be rethought to ensure the required quality especially for safety-critical systems like a collision prevention system. On the example of 1:10 scale model cars, we outline our model-based and composable simulation approach that enabled the virtualized development of autonomous driving capabilities for model cars to compete in an international competition.
  •  
11.
  • Linde, Oskar, 1979-, et al. (författare)
  • Composed Complex-Cue Histograms : An Investigation of the Information Content in Receptive Field Based Image Descriptors for Object Recognition
  • 2012
  • Ingår i: Computer Vision and Image Understanding. - : Elsevier. - 1077-3142 .- 1090-235X. ; 116:4, s. 538-560
  • Tidskriftsartikel (refereegranskat)abstract
    • Recent work has shown that effective methods for recognizing objects and spatio-temporal events can be constructed based on histograms of receptive field like image operations. This paper presents the results of an extensive study of the performance of different types of receptive field like image descriptors for histogram-based object recognition, based on different combinations of image cues in terms of Gaussian derivatives or differential invariants applied to either intensity information, colour-opponent channels or both. A rich set of composed complex-cue image descriptors is introduced and evaluated with respect to the problems of (i) recognizing previously seen object instances from previously unseen views, and (ii) classifying previously unseen objects into visual categories. It is shown that there exist novel histogram descriptors with significantly better recognition performance compared to previously used histogram features within the same class. Specifically, the experiments show that it is possible to obtain more discriminative features by combining lower-dimensional scale-space features into composed complex-cue histograms. Furthermore, different types of image descriptors have different relative advantages with respect to the problems of object instance recognition vs. object category classification. These conclusions are obtained from extensive experimental evaluations on two mutually independent data sets. For the task of recognizing specific object instances, combined histograms of spatial and spatio-chromatic derivatives are highly discriminative, and several image descriptors in terms rotationally invariant (intensity and spatio-chromatic) differential invariants up to order two lead to very high recognition rates. For the task of category classification, primary information is contained in both first- and second-order derivatives, where second-order partial derivatives constitute the most discriminative cue. Dimensionality reduction by principal component analysis and variance normalization prior to training and recognition can in many cases lead to a significant increase in recognition or classification performance. Surprisingly high recognition rates can even be obtained with binary histograms that reveal the polarity of local scale-space features, and which can be expected to be particularly robust to illumination variations. An overall conclusion from this study is that compared to previously used lower-dimensional histograms, the use of composed complex-cue histograms of higher dimensionality reveals the co-variation of multiple cues and enables much better recognition performance, both with regard to the problems of recognizing previously seen objects from novel views and for classifying previously unseen objects into visual categories.
  •  
12.
  •  
13.
  • Pelliccione, Patrizio, et al. (författare)
  • The Role of Parts in the System Behaviour
  • 2014
  • Ingår i: International Workshop on Software Engineering for Resilient Systems. - 9783319122410 ; 8785, s. 24-39
  • Konferensbidrag (övrigt vetenskapligt/konstnärligt)
  •  
14.
  • Selpi, Selpi, 1977, et al. (författare)
  • Automatic real-time FACS-coder to anonymise drivers in eye tracker videos
  • 2011
  • Ingår i: Proceedings of the IEEE International Conference on Computer Vision. 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011, Barcelona, 6-13 November 2011. - 9781467300629 ; , s. 1986-1993
  • Konferensbidrag (refereegranskat)abstract
    • The driver’s face is a rich source of information for understanding driver behaviour. From the driver’s face, one could get an idea of the driver’s emotional state and where s/he is looking. In recent years, naturalistic driving studies and field operational tests have been conducted to collect driver behavioural data, which often includes video of the driver, from many drivers driving for an extended period of time. Due to the Data Privacy Act, it is desirable to make the driver video anonymous, while preserving the original facial expressions. This paper describes our attempt to make a system that could do so. The system is a combination of an automatic Facial Action Coding System (FACS) coder based on Active Appearance Models (AAMs), a classifier that analyses local deformations in the AAM shape mesh and a 3D visualisation. The image acquisition hardware is based on a SmartEye eye tracker installed in a vehicle. The eye tracker we used provides a constant image quality independent of external illumination, which is a precondition for deploying the system in a vehicle environment. While the system uses Action Unit (AU) activations internally, the evaluation was done using the six basic emotions.
  •  
15.
  • Wahlberg, Fredrik, et al. (författare)
  • Data Mining Medieval Documents by Word Spotting
  • 2011
  • Ingår i: Proceedings of the 2011 Workshop on Historical Document Imaging and Processing. - New York : ACM. - 9781450309165 ; , s. 75-82
  • Konferensbidrag (refereegranskat)abstract
    • This paper presents novel results for word spotting based on dynamic time warping applied to medieval manuscripts in Latin and Old Swedish. A target word is marked by a user, and the method automatically finds similar word forms in the document by matching them against the target. The method automatically identifies pages and lines. We show that our method improves accuracy compared to earlier proposals for this kind of handwriting. An advantage of the new method is that it performs matching within a text line without presupposing that the difficult problem of segmenting the text line into individual words has been solved. We evaluate our word spotting implementation on two medieval manuscripts representing two script types. We also show that it can be useful by helping a user find words in a manuscript and present graphs of word statistics as a function of page number.
  •  
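The word-spotting entry above matches a user-marked target word against word forms in a manuscript using dynamic time warping. As a minimal sketch of that core matching step, the code below computes a length-normalised DTW distance between two feature sequences (for example, per-column profile features of word images). The actual feature extraction and thresholding used in the paper are not reproduced here.

```python
import numpy as np

def dtw_distance(a, b):
    """Length-normalised dynamic time warping distance between two
    feature sequences a (m, d) and b (n, d)."""
    m, n = len(a), len(b)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local match cost
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[m, n] / (m + n)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    target = rng.random((40, 4))                          # marked target word
    near = target + 0.02 * rng.standard_normal((40, 4))   # same word, slight variation
    far = rng.random((55, 4))                             # unrelated word
    print("near:", round(dtw_distance(target, near), 3),
          " far:", round(dtw_distance(target, far), 3))
```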
16.
  • Åhlén, Julia, et al. (författare)
  • Early Recognition of Smoke in Digital Video
  • 2010
  • Ingår i: Advances in Communications, Computers, Systems, Circuits and Devices. - Athens : World Scientific and Engineering Academy and Society. - 9789604742509 ; , s. 301-306
  • Konferensbidrag (refereegranskat)abstract
    • This paper presents a method for direct smoke detection from video without enhancement pre-processing steps. Smoke is characterized by transparency, gray color and irregularities in motion, which are hard to describe with the basic image features. A method for robust smoke description using a color balancing algorithm and turbulence calculation is presented in this work. Background extraction is used as a first step in processing. All moving objects are candidates for smoke. We make use of the Gray World algorithm and compare the results with the original video sequence in order to extract image features within some particular gray scale interval. As a last step we calculate the shape complexity of turbulent phenomena and apply it to the incoming video stream. As a result we extract only smoke from the video. Features such as shadows, illumination changes and people will not be mistaken for smoke by the algorithm. This method gives an early indication of smoke in the observed scene.
  •  
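The smoke-detection entry above combines background extraction, Gray World colour balancing and a grey-interval test on moving regions. The sketch below shows a plain Gray World balance and a crude grey-interval filter applied to an externally supplied motion mask; the interval bounds and tolerance are assumptions, and the paper's turbulence/shape-complexity step is omitted.

```python
import numpy as np

def gray_world_balance(frame_rgb):
    """Gray World colour balancing: scale each channel so its mean matches
    the global mean, a simple white-balance step."""
    frame = frame_rgb.astype(float)
    channel_means = frame.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
    return np.clip(frame * gains, 0, 255).astype(np.uint8)

def grayish_moving_mask(balanced, moving_mask, low=80, high=200, tol=20):
    """Keep moving pixels whose balanced colour is close to grey and lies in a
    mid-grey interval -- a crude stand-in for the smoke-candidate test
    (interval and tolerance are illustrative assumptions)."""
    bal = balanced.astype(float)
    near_gray = (bal.max(axis=2) - bal.min(axis=2)) < tol
    in_interval = (bal.mean(axis=2) > low) & (bal.mean(axis=2) < high)
    return moving_mask & near_gray & in_interval

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    frame = (rng.random((120, 160, 3)) * 255).astype(np.uint8)
    moving = np.zeros((120, 160), dtype=bool)
    moving[40:80, 50:110] = True                     # pretend background subtraction output
    mask = grayish_moving_mask(gray_world_balance(frame), moving)
    print("candidate smoke pixels:", int(mask.sum()))
```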
17.
  • Yun, Yixiao, 1987, et al. (författare)
  • Image Classification by Multi-Class Boosting of Visual and Infrared Fusion with Applications to Object Pose Recognition
  • 2013
  • Ingår i: Swedish Symposium on Image Analysis (SSBA 2013), March 14-15, Göteborg, Sweden. ; , s. 4-
  • Konferensbidrag (övrigt vetenskapligt/konstnärligt)abstract
    • This paper proposes a novel method for multiview object pose classification through sequential learning and sensor fusion. The basic idea is to use images observed in visual and infrared (IR) bands, with the same sampling weight under a multi-class boosting framework. The main contribution of this paper is a multi-class AdaBoost classification framework where visual and infrared information interactively complement each other. This is achieved by learning hypothesis for visual and infrared bands independently and then fusing the optimized hypothesis subensembles. Experiments are conducted on several image datasets including a set of visual and thermal IR images containing 4844 face images in 5 different poses. Results have shown significant increase in classification rate as compared with an existing multi-class AdaBoost algorithm SAMME trained on visual or infrared images alone, as well as a simple baseline classification-fusion algorithm.
  •  
18.
  • Agrawal, Vikas, et al. (författare)
  • The AAAI-13 Conference Workshops
  • 2013
  • Ingår i: The AI Magazine. - : Association for the Advancement of Artificial Intelligence. - 0738-4602 .- 2371-9621. ; 34:4, s. 108-115
  • Tidskriftsartikel (refereegranskat)abstract
    • The AAAI-13 Workshop Program, a part of the 27th AAAI Conference on Artificial Intelligence, was held Sunday and Monday, July 14-15, 2013, at the Hyatt Regency Bellevue Hotel in Bellevue, Washington, USA. The program included 12 workshops covering a wide range of topics in artificial intelligence, including Activity Context-Aware System Architectures (WS-13-05); Artificial Intelligence and Robotics Methods in Computational Biology (WS-13-06); Combining Constraint Solving with Mining and Learning (WS-13-07); Computer Poker and Imperfect Information (WS-13-08); Expanding the Boundaries of Health Informatics Using Artificial Intelligence (WS-13-09); Intelligent Robotic Systems (WS-13-10); Intelligent Techniques for Web Personalization and Recommendation (WS-13-11); Learning Rich Representations from Low-Level Sensors (WS-13-12); Plan, Activity, and Intent Recognition (WS-13-13); Space, Time, and Ambient Intelligence (WS-13-14); Trading Agent Design and Analysis (WS-13-15); and Statistical Relational Artificial Intelligence (WS-13-16).
  •  
19.
  • Burger, Birgitta, et al. (författare)
  • Communication of Musical Expression by Means of Mobile Robot Gestures
  • 2010
  • Ingår i: Journal on Multimodal User Interfaces. - Stockholm : Springer Science and Business Media LLC. - 1783-7677 .- 1783-8738. ; 3:1, s. 109-118
  • Tidskriftsartikel (refereegranskat)abstract
    • We developed a robotic system that can behave in an emotional way. A 3-wheeled simple robot with limited degrees of freedom was designed. Our goal was to make the robot displaying emotions in music performance by performing expressive movements. These movements have been compiled and programmed based on literature about emotion in music, musicians’ movements in expressive performances, and object shapes that convey different emotional intentions. The emotions happiness, anger, and sadness have been implemented in this way. General results from behavioral experiments show that emotional intentions can be synthesized, displayed and communicated by an artificial creature, also in constrained circumstances.
  •  
20.
  • Dobnik, Simon, 1977, et al. (författare)
  • Modelling language, action and perception in Type Theory with Records : Language, action and perception in TTR
  • 2012
  • Ingår i: Proceedings of the 7th International Workshop on Constraint Solving and Language Processing (CSLP'12). ; , s. 51-62
  • Konferensbidrag (refereegranskat)abstract
    • We present a formal model for natural language semantics using Type Theory with Records (TTR) and argue that it is better suited for representing the meaning of spatial descriptions than traditional formal semantic models. Spatial descriptions include perceptual, conceptual and discourse knowledge which we represent all in a single framework. Being a computational framework TTR is suited for modelling language and cognition of conversational agents in robotics and virtual environments where interoperability between language, action and perception is required. The perceptual systems gain access to abstract conceptual meaning representations of language while the latter can be justified in action and perception.
  •  
21.
  •  
22.
  • Kavathatzopoulos, Iordanis, 1956- (författare)
  • Robots and systems as autonomous ethical agents
  • 2010
  • Ingår i: INTECH 2010. - Bangkok : Assumption University. - 9789746151108 ; , s. 5-9
  • Konferensbidrag (refereegranskat)abstract
    • IT systems and robots can help us to solve many problems caused by the quantity, variation and complexity of information; because we need to handle dangerous and risky situations; or because of our social and emotional needs like elderly care. In helping us, these systems have to make decisions and act accordingly to achieve the goals for which they were built. Ethical decision support tools can be integrated into robots and other decision making systems to secure that decisions are made according to the basic theories of philosophy and to the findings of psychological research.  This can be done, in non-independent systems, as a way for the system to report to its operator, and to support the operator's ethical decision making. On the other hand, fully independent systems should be able to regulate their own decision making strategies and processes. However, this cannot be based on normative predefined criteria, or on the ability to make choices, or on having own control, or on ability of rational processing.  It seems that it is necessary for an independent robot or decision system to have "emotions." That is, a kind of ultimate purposes that can lead the decision process, and depending on the circumstances, guide the adoption of a decision strategy, whatever it may be, rational, heuristic or automatic.
  •  
23.
  • Kavathatzopoulos, Iordanis, 1956-, et al. (författare)
  • What are ethical agents and how can we make them work properly?
  • 2011
  • Ingår i: The computational turn. - Münster : MV-Wissenschaft. - 9783869913551 ; , s. 151-153
  • Konferensbidrag (refereegranskat)abstract
    • To support ethical decision making in autonomous agents, we suggest to implement decision tools based on classical philosophy and psychological research. As one possible avenue, we present EthXpert, which supports the process of structuring and assembling information about situations with possible moral implications.
  •  
24.
  • Kiselev, Andrey, 1982-, et al. (författare)
  • Semi-Autonomous Cooperative Driving for Mobile Robotic Telepresence Systems
  • 2014
  • Ingår i: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014). - New York, NY, USA : IEEE conference proceedings. - 9781450326582 ; , s. 104-104
  • Konferensbidrag (refereegranskat)abstract
    • Mobile robotic telepresence (MRP) has been introduced to allow communication from remote locations. Modern MRP systems offer rich capabilities for human-human interactions. However, simply driving a telepresence robot can become a burden especially for novice users, leaving no room for interaction at all. In this video we introduce a project which aims to incorporate advanced robotic algorithms into manned telepresence robots in a natural way to allow human-robot cooperation for safe driving. It also shows a very first implementation of cooperative driving based on extracting a safe drivable area in real time using the image stream received from the robot.
  •  
25.
  • Kiselev, Andrey, 1982-, et al. (författare)
  • The Effect of Field of View on Social Interaction in Mobile Robotic Telepresence Systems
  • 2014
  • Ingår i: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014). - New York, NY, USA : IEEE conference proceedings. - 9781450326582 ; , s. 214-215
  • Konferensbidrag (refereegranskat)abstract
    • One goal of mobile robotic telepresence for social interaction is to design robotic units that are easy to operate for novice users and promote good interaction between people. This paper presents an exploratory study on the effect of camera orientation and field of view on the interaction between a remote and local user. Our findings suggest that limiting the width of the field of view can lead to better interaction quality as it encourages remote users to orient the robot towards local users.
  •  
26.
  • Liu, Fei, et al. (författare)
  • Detection of Façade Regions in Street View Images from Split-and-Merge of Perspective Patches
  • 2014
  • Ingår i: Journal of Image and Graphics. - San Jose, CA, USA : Engineering and Technology Publishing. - 2301-3699. ; 2:1, s. 8-14
  • Tidskriftsartikel (refereegranskat)abstract
    • Identification of building façades from digital images is one of the central problems in mobile augmented reality (MAR) applications in the built environment. Directly analyzing the whole image can increase the difficulty of façade identification due to the presence of image portions which are not façade. This paper presents an automatic approach to façade region detection given a single street view image as a pre-processing step to subsequent steps of façade identification. We devise a coarse façade region detection method based on the observation that façades are image regions with repetitive patterns containing a large amount of vertical and horizontal line segments. Firstly, scan lines are constructed from vanishing points and center points of image line segments. Hue profiles along these lines are then analyzed and used to decompose the image into rectilinear patches with similar repetitive patterns. Finally, patches are merged into larger coherent regions and the main building façade region is chosen based on the occurrence of horizontal and vertical line segments within each of the merged regions. A validation of our method showed that on average façade regions are detected in conformity with manually segmented images as ground truth.
  •  
27.
  • Lu, Zhihan, et al. (författare)
  • Anaglyph 3D stereoscopic visualization of 2D video based on fundamental matrix
  • 2013
  • Ingår i: 2013 International Conferenceon Virtual Reality and Visualization. - : IEEE. - 9780769551500 - 9781479923229 ; , s. 305-308
  • Konferensbidrag (refereegranskat)abstract
    • In this paper, we propose a simple Anaglyph 3D stereo generation algorithm from 2D video sequence with monocular camera. In our novel approach we employ camera pose estimation method to directly generate stereoscopic 3D from 2D video without building depth map explicitly. Our cost effective method is suitable for arbitrary real-world video sequence and produces smooth results. We use image stitching based on plane correspondence using fundamental matrix. To this end we also demonstrate that correspondence plane image stitching based on Homography matrix only cannot generate better result. Furthermore, we utilize the structure from motion (with fundamental matrix) based reconstructed camera pose model to accomplish visual anaglyph 3D illusion. The proposed approach demonstrates a very good performance for most of the video sequences.
  •  
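The entry above generates anaglyph 3D from monocular video via camera-pose estimation. The final compositing step is the straightforward part, and the sketch below shows only that: combining a left and a right view into a red-cyan anaglyph. Producing the second view from a single camera, which is the paper's actual contribution, is not attempted here.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Compose a red-cyan anaglyph from a stereo pair: red channel from the
    left view, green and blue channels from the right view."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]       # red from the left eye
    anaglyph[..., 1:] = right_rgb[..., 1:]    # green/blue from the right eye
    return anaglyph

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    left = (rng.random((90, 120, 3)) * 255).astype(np.uint8)
    right = np.roll(left, 4, axis=1)          # fake horizontal parallax for the demo
    print(make_anaglyph(left, right).shape)
```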
28.
  • Mosiello, Giovanni, et al. (författare)
  • Using augmented reality to improve usability of the user interface for driving a telepresence robot
  • 2013
  • Ingår i: Paladyn - Journal of Behavioral Robotics. - : DeGruyter. - 2080-9778 .- 2081-4836. ; 4:3, s. 174-181
  • Tidskriftsartikel (refereegranskat)abstract
    • Mobile Robotic Telepresence (MRP) helps people to communicate in natural ways despite being physically located in different parts of the world. User interfaces of such systems are as critical as the design and functionality of the robot itself for creating conditions for natural interaction. This article presents an exploratory study analysing different robot teleoperation interfaces. The goals of this paper are to investigate the possible effect of using augmented reality as the means to drive a robot, to identify key factors of the user interface in order to improve the user experience through a driving interface, and to minimize interface familiarization time for non-experienced users. The study involved 23 participants whose robot driving attempts via different user interfaces were analysed. The results show that a user interface with an augmented reality interface resulted in better driving experience.
  •  
29.
  • Patrignani, Norberto (författare)
  • From computer ethics to future (and information) ethics : The challenge of Nano-Bots
  • 2014
  • Ingår i: Ethical dimensions of bio-nanotechnology. - Hershey, PA, USA : IGI Global. - 9781466618947
  • Bokkapitel (refereegranskat)abstract
    • One of the emerging technologies that is getting a lot of attention is nano-technology. In particular, in this area, the convergence of the research fields of biotechnology, information technology, nanotechnology and neuroscience (or cognitive science) is introducing nano-robots, or nano-bots. These machines promise to lead to the development of a large number of potential applications in medicine, but at the same time they also raise a lot of social and ethical issues. This chapter introduces several ways to start an ethical reflection in relation to nano-bots. The traditional "computer ethics" approach and the new "future ethics" proposition are both discussed and applied to this technology. The challenges introduced by nano-bots are so complex that it is possible that the application of the Precautionary Principle would be required. A further ethical analysis of nano-bot applications in medicine may benefit from new methodologies and strategies such as the stakeholders' network and Floridi's "entropy (the evil of Infosphere)" concept.
  •  
30.
  • Romero, Mario, 1973-, et al. (författare)
  • Evaluating Video Visualizations of Human Behavior
  • 2011
  • Ingår i: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. - New York, NY, USA : ACM. ; , s. 1441-1450
  • Konferensbidrag (refereegranskat)abstract
    • Previously, we presented Viz-A-Vis, a VIsualiZation of Activity through computer VISion [17]. Viz-A-Vis visualizes behavior as aggregate motion over observation space. In this paper, we present two complementary user studies of Viz-A-Vis measuring its performance and discovery affordances. First, we present a controlled user study aimed at comparatively measuring behavioral analysis preference and performance for observation and search tasks. Second, we describe a study with architects measuring discovery affordances and potential impacts on their work practices. We conclude: 1) Viz-A-Vis significantly reduced search time; and 2) it increased the number and quality of insightful discoveries.
  •  
31.
  • Seipel, Stefan (författare)
  • Evaluating 2D and 3D geovisualisations for basic spatial assessment
  • 2013
  • Ingår i: Behavior and Information Technology. - 0144-929X .- 1362-3001. ; 32:8, s. 845-858
  • Tidskriftsartikel (refereegranskat)abstract
    • This study investigates the use of 2D and 3D presentations of maps for the assessment of distances in a geographical context. Different types of 3D representations have been studied: A weak 3D visualisation that provides static monocular depth cues and a strong 3D visualisation that uses stereoscopic and kinetic depth cues. Two controlled experiments were conducted to test hypotheses regarding subjects' efficiency in visually identifying the shortest distance among a set of market locations in a map. As a general result, we found that participants were able to correctly identify shortest distances when the difference to potential alternatives was sufficiently large, but performance decreased systematically when this difference decreased. Noticeable differences emerged for the investigated visualisation conditions. Participants in this study were equally efficient when using a weak 3D representation and a 2D representation. When the strong 3D visualisation was employed, they reported visual discomfort and tasks solved were significantly less correct. Presentations of intrinsic 2D content (maps) in 3D context did not, in this study, benefit from cues provided by a strong 3D visualisation and are adequately implemented using a weak 3D visualisation.
  •  
32.
  • Seipel, Stefan, et al. (författare)
  • Solving combined geospatial tasks using 2D and 3D bar charts
  • 2012
  • Ingår i: Information Visualisation (IV), 2012 16th International Conference. - 9780769547718 ; , s. 157-163
  • Konferensbidrag (refereegranskat)abstract
    • This paper presents a user study that investigates 2D and 3D visualizations of bar charts in geographic maps. The task to be solved by the participants in this study required estimation of the ratio of two different spatial distance measures and relative ranking among potential candidates. The results of this experiment show that subjects were equally fast and accurate when using both the 2D and 3D visualizations. Visual discomfort was reported by almost half of the test population, but performance was not affected. Our study also showed that frequent game players did not benefit more from a 3D visualization than inexperienced game-players.
  •  
33.
  • Shin, Grace, et al. (författare)
  • VizKid : A Behavior Capture and Visualization System of Adult-child Interaction
  • 2011
  • Ingår i: Proceedings of the 1st International Conference on Human Interface and the Management of Information. ; , s. 190-198
  • Konferensbidrag (refereegranskat)abstract
    • We present VizKid, a capture and visualization system for supporting the analysis of social interactions between two individuals. The development of this system is motivated by the need for objective measures of social approach and avoidance behaviors of children with autism. VizKid visualizes the position and orientation of an adult and a child as they interact with one another over an extended period of time. We report on the design of VizKid and its rationale.
  •  
34.
  •  
35.
  • ur Réhman, Shafiq, 1978- (författare)
  • Expressing emotions through vibration for perception and control
  • 2010
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • This thesis addresses a challenging problem: “how to let the visually impaired ‘see’ others emotions”. We, human beings, are heavily dependent on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved etc. People use emotional information from facial expressions to switch between conversation topics and to determine attitudes of individuals. Missing emotional information from facial expressions and head gestures makes the visually impaired extremely difficult to interact with others in social events. To enhance the visually impaired’s social interactive ability, in this thesis we have been working on the scientific topic of ‘expressing human emotions through vibrotactile patterns’. It is quite challenging to deliver human emotions through touch since our touch channel is very limited. We first investigated how to render emotions through a vibrator. We developed a real time “lipless” tracking system to extract dynamic emotions from the mouth and employed mobile phones as a platform for the visually impaired to perceive primary emotion types. Later on, we extended the system to render more general dynamic media signals: for example, render live football games through vibration in the mobile for improving mobile user communication and entertainment experience. To display more natural emotions (i.e. emotion type plus emotion intensity), we developed the technology to enable the visually impaired to directly interpret human emotions. This was achieved by use of machine vision techniques and vibrotactile display. The display is comprised of a ‘vibration actuators matrix’ mounted on the back of a chair and the actuators are sequentially activated to provide dynamic emotional information. The research focus has been on finding a global, analytical, and semantic representation for facial expressions to replace state of the art facial action coding systems (FACS) approach. We proposed to use the manifold of facial expressions to characterize dynamic emotions. The basic emotional expressions with increasing intensity become curves on the manifold extended from the center. The blends of emotions lie between those curves, which could be defined analytically by the positions of the main curves. The manifold is the “Braille Code” of emotions. The developed methodology and technology has been extended for building assistive wheelchair systems to aid a specific group of disabled people, cerebral palsy or stroke patients (i.e. lacking fine motor control skills), who don’t have ability to access and control the wheelchair with conventional means, such as joystick or chin stick. The solution is to extract the manifold of the head or the tongue gestures for controlling the wheelchair. The manifold is rendered by a 2D vibration array to provide user of the wheelchair with action information from gestures and system status information, which is very important in enhancing usability of such an assistive system. Current research work not only provides a foundation stone for vibrotactile rendering system based on object localization but also a concrete step to a new dimension of human-machine interaction.
  •  
36.
  • Yun, Yixiao, 1987, et al. (författare)
  • Multi-View ML Object Tracking with Online Learning on Riemannian Manifolds by Combining Geometric Constraints
  • 2013
  • Ingår i: IEEE Journal on Emerging and Selected Topics in Circuits and Systems. - 2156-3365 .- 2156-3357. ; 3:2, s. 12 -197
  • Tidskriftsartikel (refereegranskat)abstract
    • This paper addresses issues in object tracking with occlusion scenarios, where multiple uncalibrated cameras with overlapping fields of view are exploited. We propose a novel method where tracking is first done independently in each individual view and then tracking results are mapped from different views to improve the tracking jointly. The proposed tracker uses the assumptions that objects are visible in at least one view and move uprightly on a common planar ground that may induce a homography relation between views. A method for online learning of object appearances on Riemannian manifolds is also introduced. The main novelties of the paper include: (a) define a similarity measure, based on geodesics between a candidate object and a set of mapped references from multiple views on a Riemannian manifold; (b) propose multiview maximum likelihood (ML) estimation of object bounding box parameters, based on Gaussian-distributed geodesics on the manifold; (c) introduce online learning of object appearances on the manifold, taking into account of possible occlusions; (d) utilize projective transformations for objects between views, where parameters are estimated from warped vertical axis by combining planar homography, epipolar geometry and vertical vanishing point; (e) embed single-view trackers in a three-layer multi-view tracking scheme. Experiments have been conducted on videos from multiple uncalibrated cameras, where objects contain long-term partial/full occlusions, or frequent intersections. Comparisons have been made with three existing methods, where the performance is evaluated both qualitatively and quantitatively. Results have shown the effectiveness of the proposed method in terms of robustness against tracking drift caused by occlusions.
  •  
37.
  • Åhlén, Julia, et al. (författare)
  • Knowledge Based Single Building Extraction and Recognition
  • 2014
  • Ingår i: Proceedings WSEAS International Conference on Computer Engineering and Applications, 2014. - : WSEAS Press. - 9789604743612 ; , s. 29-35
  • Konferensbidrag (refereegranskat)abstract
    • Building facade extraction is the primary step in the recognition process in outdoor scenes. It is also a challenging task since each building can be viewed from different angles or under different lighting conditions. In outdoor imagery, regions such as sky, trees and pavement cause interference for successful building facade recognition. In this paper we propose a knowledge based approach to automatically segment out the whole facade, or major parts of the facade, from an outdoor scene. The found building regions are then subjected to a recognition process. The system is composed of two modules: a building facade region segmentation module and a facade recognition module. In the facade segmentation module, color processing and object position coordinates are used. In the facade recognition module, Chamfer metrics are applied. In a real time recognition scenario, the image with a building is first analyzed in order to extract the facade region, which is then compared to a database with feature descriptors in order to find a match. The results show that the recognition rate depends on the precision of the building extraction part, which in turn depends on the homogeneity of the facade colors.
  •  
38.
  • Björkman, Mårten, 1970-, et al. (författare)
  • Enhancing Visual Perception of Shape through Tactile Glances
  • 2013
  • Ingår i: Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. - : IEEE conference proceedings. - 2153-0866 .- 2153-0858. - 9781467363587 ; , s. 3180-3186
  • Konferensbidrag (refereegranskat)abstract
    • Object shape information is an important parameter in robot grasping tasks. However, it may be difficult to obtain accurate models of novel objects due to incomplete and noisy sensory measurements. In addition, object shape may change due to frequent interaction with the object (cereal boxes, etc). In this paper, we present a probabilistic approach for learning object models based on visual and tactile perception through physical interaction with an object. Our robot explores unknown objects by touching them strategically at parts that are uncertain in terms of shape. The robot starts by using only visual features to form an initial hypothesis about the object shape, then gradually adds tactile measurements to refine the object model. Our experiments involve ten objects of varying shapes and sizes in a real setup. The results show that our method is capable of choosing a small number of touches to construct object models similar to real object shapes and to determine similarities among acquired models.
  •  
39.
  •  
40.
  • Hegarty, Peter, 1971, et al. (författare)
  • A variant of the multi-agent rendezvous problem
  • 2013
  • Annan publikation (övrigt vetenskapligt/konstnärligt)abstract
    • The classical multi-agent rendezvous problem asks for a deterministic algorithm by which $n$ points scattered in a plane can move about at constant speed and merge at a single point, assuming each point can use only the locations of the others it sees when making decisions and that the visibility graph as a whole is connected. In time complexity analyses of such algorithms, only the number of rounds of computation required are usually considered, not the amount of computation done per round. In this paper, we consider $\Omega(n^2 \log n)$ points distributed independently and uniformly at random in a disc of radius $n$ and, assuming each point can not only see but also, in principle, communicate with others within unit distance, seek a randomised merging algorithm which asymptotically almost surely (a.a.s.) runs in time $O(n)$, in other words in time linear in the radius of the disc rather than in the number of points. Under a precise set of assumptions concerning the communication capabilities of neighboring points, we describe an algorithm which a.a.s. runs in time $O(n)$ provided the number of points is $o(n^3)$. Several questions are posed for future work.
  •  
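For readers unfamiliar with the classical multi-agent rendezvous setting referred to above, the sketch below simulates the naive baseline rule in which every point repeatedly moves a small step towards the centroid of the neighbours it can see. This is only the textbook setting the paper starts from; it is not the randomised, communication-based O(n) algorithm the authors propose, and the naive rule carries no convergence guarantee.

```python
import numpy as np

def rendezvous_step(points, visibility=1.0, speed=0.05):
    """One synchronous round of a naive rendezvous rule: each point moves up to
    `speed` towards the centroid of the points within `visibility` (itself included)."""
    diffs = points[:, None, :] - points[None, :, :]
    visible = np.linalg.norm(diffs, axis=2) <= visibility
    new_points = points.copy()
    for i in range(len(points)):
        step = points[visible[i]].mean(axis=0) - points[i]
        dist = np.linalg.norm(step)
        if dist > 1e-9:
            new_points[i] = points[i] + min(speed, dist) * step / dist
    return new_points

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    pts = rng.random((30, 2)) * 3.0          # points scattered in a 3x3 region
    for _ in range(500):
        pts = rendezvous_step(pts)
    print("spread after 500 rounds:", round(np.ptp(pts, axis=0).max(), 3))
```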
41.
  •  
42.
  •  
43.
  • Rødseth, Ørnulf Jan, et al. (författare)
  • Communication Architecture for an Unmanned Merchant Ship
  • 2013
  • Ingår i: OCEANS - Bergen, 2013 MTS/IEEE. - 9781479900008 ; :2013, s. 1-9
  • Konferensbidrag (refereegranskat)abstract
    • Unmanned ships are an interesting proposal to implement slow steaming and save fuel while avoiding that the crew has to stay on board for very long deep-sea passages. To maintain efficiency and safety, autonomy has to be implemented to enable the ship to operate without requiring the shore control center (SCC) to continuously control the ship. Communication between the ship and the SCC is therefore critical for the unmanned ship, and proper safety and security precautions are required, including sufficient redundancy and backup solutions. Communication systems should be able to supply at least 4 Megabits/second for full remote control from the SCC, but reduced operation can be maintained at down to 125 kilobits/second. In autonomous mode, the required communication bandwidth will be very low. For an autonomous ship the higher bandwidth requirements are from ship to shore, which is the opposite of the situation for normal ships. Cost and availability of communication is an issue. The use of technical and functional indexes will enable the SCC to monitor the status of the ship at minimum load on operators and on the communication systems. The security and integrity of information transfers is crucial and appropriate means must be taken to ensure failure tolerance and fail-to-safe properties of the system.
  •  
44.
  • Sarve, Hamid, 1981-, et al. (författare)
  • Cuanto: A Tool for Quantification of Bone Tissue in the Proximityof Implants
  • 2011
  • Ingår i: Proceedings of SSBA 2011.
  • Konferensbidrag (övrigt vetenskapligt/konstnärligt)abstract
    • Quantitative analysis of histological images of bone-implant samples is an important step in the evaluation of bone-implant integration. However, the quantification is a tedious task when carried out manually in the light microscope. To automatize the quantification, Cuanto, a software for measurements of bone area and estimation of bone-implant contact length in histological images of bone-implant samples, has been developed. The quantification result of the software is compared to manual measurements; area measurements correspond well with the manual quantification whereas significant differences in the length estimation are observed. The possibility of zooming in down to cell level when quantifying manually is believed to explain the discrepancy.
  •  
45.
  • Svensson, Lennart, 1980-, et al. (författare)
  • Rigid template registration in MET images using CUDA
  • 2012
  • Ingår i: VISAPP 2012. - Rome : SciTePress. - 9789898565037 ; 2, s. 418-422
  • Konferensbidrag (refereegranskat)abstract
    • Rigid registration is a basic tool in many applications, especially in Molecular Electron Tomography (MET), and also in, e.g., registration of rigid implants in medical images and as initialization for deformable registration. As MET volumes have a low signal to noise ratio, a complete search of the six-dimensional (6D) parameter space is often employed. In this paper, we describe how rigid registration with normalized cross-correlation can be implemented on the GPU using NVIDIA's parallel computing architecture CUDA. We compare the performance to the Colores software and two Matlab implementations, one of which is using the GPU accelerated JACKET library. With well-aligned padding and using CUDA, the performance increases by an order of a magnitude, making it feasible to work with three-dimensional fitness landscapes, here denoted scoring volumes, that are generated on the fly. This will eventually enable the biologists to interactively register macromolecule chains in MET volumes piece by piece.
  •  
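The registration entry above scores candidate rigid poses with normalized cross-correlation over a full six-dimensional search. The toy sketch below keeps the scoring function but searches only integer 3-D translations, to keep the example short; the CUDA parallelisation and the rotational part of the search described in the paper are not reproduced.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Zero-mean normalised cross-correlation between two equally sized volumes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_translation(volume, template, max_shift=2):
    """Exhaustive search over integer 3-D translations only -- a toy stand-in
    for the full 6-D (rotation + translation) search."""
    best_score, best_shift = -np.inf, (0, 0, 0)
    for dz in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(template, (dz, dy, dx), axis=(0, 1, 2))
                score = normalized_cross_correlation(volume, shifted)
                if score > best_score:
                    best_score, best_shift = score, (dz, dy, dx)
    return best_shift, best_score

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    template = rng.random((20, 20, 20))
    volume = np.roll(template, (1, -2, 0), axis=(0, 1, 2))   # known ground-truth shift
    volume = volume + 0.1 * rng.standard_normal(volume.shape)
    print(best_translation(volume, template))
```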
46.
  • Viña, Francisco, 1990-, et al. (författare)
  • Predicting Slippage and Learning Manipulation Affordances through Gaussian Process Regression
  • 2013
  • Ingår i: IEEE-RAS International Conference on Humanoid Robots. - : IEEE Computer Society. - 2164-0580 .- 2164-0572. ; , s. 462-468
  • Konferensbidrag (refereegranskat)abstract
    • Object grasping is commonly followed by some form of object manipulation - either when using the grasped object as a tool or actively changing its position in the hand through in-hand manipulation to afford further interaction. In this process, slippage may occur due to inappropriate contact forces, various types of noise and/or due to the unexpected interaction or collision with the environment. In this paper, we study the problem of identifying continuous bounds on the forces and torques that can be applied on a grasped object before slippage occurs. We model the problem as kinesthetic rather than cutaneous learning given that the measurements originate from a wrist mounted force-torque sensor. Given the continuous output, this regression problem is solved using a Gaussian Process approach. We demonstrate a dual armed humanoid robot that can autonomously learn force and torque bounds and use these to execute actions on objects such as sliding and pushing. We show that the model can be used not only for the detection of maximum allowable forces and torques but also for potentially identifying what types of tasks, denoted as manipulation affordances, a specific grasp configuration allows. The latter can then be used to either avoid specific motions or as a simple step of achieving in-hand manipulation of objects through interaction with the environment.
  •  
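The entry above learns continuous force and torque bounds with Gaussian process regression. The sketch below is a generic GP regression with a squared-exponential kernel applied to a made-up one-dimensional example (grasp angle versus maximum tangential force before slip); the input features, targets and hyperparameters are illustrative assumptions, not the paper's kinesthetic data.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """Standard GP regression posterior mean and variance at the test inputs."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    Kss = rbf_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    var = np.maximum(np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks)), 0.0)
    return mean, var

if __name__ == "__main__":
    # Toy data: grasp "pose angle" -> maximum tangential force before slip (made up).
    rng = np.random.default_rng(6)
    angles = rng.uniform(0, np.pi, size=(25, 1))
    max_force = 4.0 + 1.5 * np.sin(angles[:, 0]) + 0.1 * rng.standard_normal(25)
    query = np.linspace(0, np.pi, 5)[:, None]
    mean, var = gp_predict(angles, max_force, query)
    for q, m, s2 in zip(query[:, 0], mean, var):
        print(f"angle={q:.2f}  predicted bound={m:.2f} +/- {np.sqrt(s2):.2f}")
```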
47.
  • Wang, Gaihua, et al. (författare)
  • A quaternion-based switching filter for colour image denoising
  • 2014
  • Ingår i: Signal Processing. - : Elsevier. - 0165-1684 .- 1872-7557. ; 102, s. 216-225
  • Tidskriftsartikel (refereegranskat)abstract
    • An improved quaternion switching filter for colour image denoising is presented. It proposes a RGB colour image as a pure quaternion form and measures differences between two colour pixels with the quaternion-based distance. Further, in noise-detection, a two-stage detection method is proposed to determine whether the current pixel is noise or not. The noisy pixels are replaced by the vector median filter (VMF) output and the noise-free ones are unchanged. Finally, we combine the advantages of quaternion-based switching filter and non-local means filter to remove mixture noise. By comparing the performance and computing time processing different images, the proposed method has superior performance which not only provides the best noise suppression results but also yields better image quality compared to other widely used filters.
  •  
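The denoising entry above treats an RGB pixel as a pure quaternion, detects impulse noise in two stages and replaces detected pixels with the vector median filter output. The sketch below compresses that into a single-stage toy: a pixel whose quaternion-style distance to the 3x3 vector median exceeds a threshold is replaced, otherwise left untouched. The threshold and the one-stage test are simplifications, and the non-local-means stage is omitted.

```python
import numpy as np

def quat_distance(p, q):
    """Distance between two colour pixels seen as pure quaternions (0, r, g, b);
    here simply the norm of the quaternion difference."""
    return np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))

def vector_median(window):
    """Vector median: the pixel in the window with the smallest total distance
    to all other pixels in the window."""
    pixels = window.reshape(-1, 3).astype(float)
    dists = np.linalg.norm(pixels[:, None, :] - pixels[None, :, :], axis=2)
    return pixels[dists.sum(axis=1).argmin()]

def switching_denoise(img, threshold=60.0):
    """Toy switching filter over 3x3 neighbourhoods (borders left unfiltered)."""
    out = img.astype(float).copy()
    h, w, _ = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            vm = vector_median(img[y - 1:y + 2, x - 1:x + 2])
            if quat_distance(img[y, x], vm) > threshold:   # flagged as impulse noise
                out[y, x] = vm                              # replace with VMF output
    return out.astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    noisy = np.full((32, 32, 3), 128, dtype=np.uint8)       # flat grey test image
    idx = rng.integers(1, 31, size=(40, 2))
    noisy[idx[:, 0], idx[:, 1]] = rng.integers(0, 256, size=(40, 3))  # impulse noise
    cleaned = switching_denoise(noisy)
    print("pixels still far from grey:",
          int((np.abs(cleaned.astype(int) - 128) > 30).any(axis=2).sum()))
```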
48.
  • Andersson, Carina, 1970- (författare)
  • Informationsdesign i tillståndsövervakning : En studie av ett bildskärmsbaserat användargränssnitt för tillståndsövervakning och tillståndsbaserat underhåll
  • 2010
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • This research concerns the information design and visual design of graphical user interfaces (GUI) in the condition monitoring and condition-based maintenance (CBM) of production equipment. It also concerns various communicative aspects of a GUI, which is used to monitor the condition of assets. It applies to one Swedish vendor and its intentions to design information. In addition, it applies to the interaction between the GUI and its individual visual elements, as well as the communication between the GUI and the users (in four Swedish paper mills). The research is performed as a single case study. Interviews and observations have been the main methods for data collection. Empirical data is analyzed with methods inferred to semiotics, rhetoric and narratology. Theories in information science and regarding remediation are used to interpret the user interface design. The key conclusion is that there are no less than five different forms of information, all important when determining the conditions of assets. These information forms include the words, images and shapes in the GUI, the machine components and peripherals equipment, the information that takes form when personnel communicate machine conditions, the personnel’s subjective associations, and the information forms that relate to the personnel's actions and interactions. Preventive technicians interpret the GUI-information individually and collectively in relation to these information forms, which influence their interpretation and understanding of the GUI information. Social media in the GUI makes it possible to represent essential information that takes form when employees communicate a machine’s condition. Photographs may represent information forms as a machine’s components, peripherals, and local environment change over time. Moreover, preventative technicians may use diagrams and photographs in the GUI to change attitudes among the personnel at the mills and convince them, for example, of a machine’s condition or the effectiveness of CBM as maintenance policy.
  •  
49.
  • Bengtsson, Tomas, 1983, et al. (författare)
  • Super-resolution reconstruction of high dynamic range images in a perceptually uniform domain
  • 2013
  • Ingår i: Optical Engineering. - 1560-2303 .- 0091-3286. ; 52:10
  • Tidskriftsartikel (refereegranskat)abstract
    • Super resolution is a signal processing method that utilizes information from multiple degraded images of the same scene in order to reconstruct an image with enhanced spatial resolution. The method is typically employed on similarly exposed pixel valued images, but it can be extended to differently exposed images with a high combined dynamic range. We propose a novel formulation of the joint super-resolution, high dynamic range image reconstruction problem, using an image domain in which the residual function of the inverse problem relates to the perception of the human visual system. Simulated results are presented, including a comparison with a conventional method, demonstrating that the proposed approach avoids some severe reconstruction artifacts.
  •  
50.
  • Bengtsson, Tomas, 1983, et al. (författare)
  • Super-resolution reconstruction of high dynamic range images with perceptual weighting of errors
  • 2013
  • Ingår i: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. - 1520-6149. - 9781479903566 ; , s. 2212-2216
  • Konferensbidrag (refereegranskat)abstract
    • Super-Resolution and High Dynamic Range image reconstruction are two different signal processing techniques that share in common that they utilize information from multiple observations of the same scene to enhance visual image quality. In this paper, both techniques are merged in a common model, and the focus is to solve the reconstruction problem in a suitable image domain, which relates to the perception of the Human Visual System. Simulated results are presented, including a comparison with a conventional method, demonstrating the benefits of the proposed approach, in this case avoiding some severe reconstruction artifacts.
  •  
Publication type
conference paper (457)
journal article (165)
book chapter (47)
doctoral thesis (32)
licentiate thesis (20)
report (15)
other publication (7)
edited collection (editorship) (5)
research review (3)
book (2)
proceedings (editorship) (2)
patent (1)
review (1)
Type of content
peer-reviewed (604)
other academic/artistic (149)
popular science, debate etc. (4)
Author/editor
Kragic, Danica (54)
Gu, Irene Yu-Hua, 19 ... (45)
Balkenius, Christian (33)
Ek, Carl Henrik (23)
Johnsson, Magnus (23)
Kahl, Fredrik (19)
Hotz, Ingrid (18)
Nalpantidis, Lazaros (18)
Khan, Zulfiqar Hasan ... (18)
Pham, Tuan D. (17)
Olsson, Carl (15)
Gasteratos, Antonios (15)
Kjellström, Hedvig (14)
Mehnert, Andrew, 196 ... (14)
Åström, Karl (13)
Heyden, Anders (13)
Yun, Yixiao, 1987 (13)
Bååth, Rasmus (12)
Seipel, Stefan (12)
Nyström, Ingela (12)
Strandmark, Petter (12)
Sintorn, Ida-Maria (11)
Lindeberg, Tony, 196 ... (11)
Lindblad, Joakim (11)
Felsberg, Michael (11)
Bekiroglu, Yasemin, ... (11)
Oskarsson, Magnus (10)
Li, Haibo (9)
Ulen, Johannes (9)
Yang, Jie (9)
Lowe, Robert (9)
Brun, Anders, 1976- (9)
Pokorny, Florian T. (9)
Johansson, Birger (9)
Fu, Keren, 1988 (9)
Smedby, Örjan (8)
Hast, Anders (8)
Carlsson, Stefan (8)
Lenz, Reiner (8)
Sladoje, Nataša (8)
Kennedy, Dominic (8)
Kuang, Yubin (8)
ur Réhman, Shafiq (8)
Brun, Anders (8)
Strand, Robin (7)
Sullivan, Josephine (7)
Bengtsson, Ewert (7)
Pronobis, Andrzej (7)
Song, Dan (7)
Luengo, Cris (7)
Higher education institution
Kungliga Tekniska Högskolan (156)
Lunds universitet (142)
Chalmers tekniska högskola (134)
Linköpings universitet (130)
Uppsala universitet (113)
Sveriges Lantbruksuniversitet (45)
Örebro universitet (34)
Göteborgs universitet (30)
Umeå universitet (26)
Högskolan i Skövde (17)
Högskolan i Halmstad (16)
Högskolan i Gävle (13)
Luleå tekniska universitet (7)
Mälardalens universitet (7)
Linnéuniversitetet (6)
RISE (4)
Högskolan Väst (3)
Mittuniversitetet (3)
Stockholms universitet (2)
Malmö universitet (2)
Högskolan Dalarna (2)
Karolinska Institutet (1)
Blekinge Tekniska Högskola (1)
Language
English (750)
Swedish (6)
French (1)
Research subject (UKÄ/SCB)
Natural Sciences (757)
Engineering and Technology (179)
Humanities (35)
Social Sciences (33)
Medical and Health Sciences (32)
Agricultural Sciences (10)

Year
