SwePub
Search the SwePub database


Results list for the search "AMNE:(NATURVETENSKAP Data- och informationsvetenskap Datorseende och robotik)"


  • Results 1-50 of 3964
1.
  • Chatterjee, Bapi, 1982 (author)
  • Lock-free Concurrent Search
  • 2017
  • Doctoral thesis (other academic/artistic) abstract
    • Contemporary computers typically consist of multiple computing cores with high compute power, and they make excellent concurrent asynchronous shared-memory systems. On the other hand, though many celebrated books on data structures and algorithms provide a comprehensive study of sequential search data structures, we have no such luxury once concurrency enters the setting. The present dissertation aims to address this paucity. We describe novel lock-free algorithms for concurrent data structures that target a variety of search problems. (i) Point search (membership query, predecessor query, nearest neighbour query) for 1-dimensional data: lock-free linked-list; lock-free internal and external binary search trees (BST). (ii) Range search for 1-dimensional data: a range search method for lock-free ordered set data structures - linked-list, skip-list and BST. (iii) Point search for multi-dimensional data: lock-free kD-tree, specifically, a generic method for nearest neighbour search. We prove that the presented algorithms are linearizable, i.e. the concurrent data structure operations intuitively display their sequential behaviour to an observer of the concurrent system. The lock-freedom in the introduced algorithms guarantees overall progress in an asynchronous shared-memory system. We present an amortized analysis of the lock-free data structures to show their efficiency. Moreover, we provide sample implementations of the algorithms and test them over extensive micro-benchmarks. Our experiments demonstrate that the implementations are scalable and perform well compared to related existing alternative implementations on common multi-core computers. Our focus is on propounding generic methodologies for efficient lock-free concurrent search. In this direction, we present the notion of help-optimality, which captures the optimization of the amortized step complexity of the operations.
In addition, we explore the language-portable design of lock-free data structures, which aims to simplify an implementation from the programmer's point of view. Finally, our techniques to implement lock-free linearizable range search and nearest neighbour search are independent of the underlying data structures and thus are adaptive to similar data structures.
  •  
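The CAS-retry pattern behind the lock-free lists described in this abstract can be sketched in a few lines. This is a minimal illustration, not the dissertation's algorithms: Python has no hardware compare-and-swap, so `_cas` simulates one with a lock, and the class name and structure are invented for the example.

```python
import threading

class Node:
    __slots__ = ("key", "next")
    def __init__(self, key, nxt=None):
        self.key = key
        self.next = nxt

class LockFreeSet:
    """Sorted linked list with a sentinel head; insert retries until its CAS wins."""
    def __init__(self):
        self.head = Node(float("-inf"))
        self._cas_lock = threading.Lock()  # stands in for hardware CAS atomicity

    def _cas(self, node, expected, new):
        # Compare-and-set on node.next; the lock simulates a single atomic instruction.
        with self._cas_lock:
            if node.next is expected:
                node.next = new
                return True
            return False

    def insert(self, key):
        while True:                        # lock-free retry loop
            pred, curr = self.head, self.head.next
            while curr is not None and curr.key < key:
                pred, curr = curr, curr.next
            if curr is not None and curr.key == key:
                return False               # already present
            if self._cas(pred, curr, Node(key, curr)):
                return True                # linearization point: the successful CAS

    def contains(self, key):
        curr = self.head.next
        while curr is not None and curr.key < key:
            curr = curr.next
        return curr is not None and curr.key == key
```

A failed CAS means another thread changed `pred.next` concurrently; the operation simply re-traverses and retries, which is what gives lock-freedom its system-wide progress guarantee.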
2.
  • Liu, Yuanhua, 1971, et al. (authors)
  • Considering the importance of user profiles in interface design
  • 2009
  • In: User Interfaces, pp. 23-
  • Book chapter (other academic/artistic) abstract
    • 'User profile' is a popular term widely employed during product design processes by industrial companies. Such a profile is normally intended to represent real users of a product. The ultimate purpose of a user profile is to help designers recognize or learn about the real user by presenting them with a description of a real user's attributes, for instance the user's gender, age, educational level, attitude, technical needs and skill level. The aim of this chapter is to provide information on the current knowledge and research about user profile issues, as well as to emphasize the importance of considering these issues in interface design. In this chapter, we mainly focus on how users' differences in expertise affect their performance or activity in various interaction contexts. Considering the complex interaction situations in practice, novice and expert users' interactions with medical user interfaces of different technical complexity will be analyzed as examples: one example focuses on differences between novice and expert users when interacting with simple medical interfaces, and the other focuses on differences when interacting with complex medical interfaces. Four issues will be analyzed and discussed: (1) how novice and expert users differ in terms of performance during the interaction; (2) how novice and expert users differ in terms of cognitive mental models during the interaction; (3) how novice and expert users should be defined in practice; and (4) what the main differences between novice and expert users imply for interface design. Besides describing the effect of users' expertise differences during the interface design process, we will also pinpoint some potential problems for research on interface design, as well as some future challenges that academic researchers and industrial engineers should face in practice.
  •  
3.
  • Amundin, Mats, et al. (authors)
  • A proposal to use distributional models to analyse dolphin vocalisation
  • 2017
  • In: Proceedings of the 1st International Workshop on Vocal Interactivity in-and-between Humans, Animals and Robots, VIHAR 2017. ISBN 9782956202905, pp. 31-32
  • Conference paper (peer-reviewed) abstract
    • This paper gives a brief introduction to the starting points of an experimental project to study dolphin communicative behaviour using distributional semantics, with methods implemented for the large-scale study of human language.
  •  
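The distributional-semantics approach this paper proposes can be illustrated with a toy pipeline: build co-occurrence vectors for call types from sequences of vocalisations, then compare them by cosine similarity. The function names and the window-based counting are illustrative assumptions, not the project's actual implementation.

```python
from math import sqrt

def cooccurrence_vectors(sequences, window=2):
    """Map each call type to a vector of co-occurrence counts within a context window."""
    vocab = sorted({c for seq in sequences for c in seq})
    index = {c: i for i, c in enumerate(vocab)}
    vectors = {c: [0] * len(vocab) for c in vocab}
    for seq in sequences:
        for i, call in enumerate(seq):
            for j in range(max(0, i - window), min(len(seq), i + window + 1)):
                if j != i:
                    vectors[call][index[seq[j]]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two count vectors; 0.0 for empty vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

Call types that occur in similar contexts (here, hypothetical "whistle" and "buzz" both neighbouring "click") end up with similar vectors, which is exactly the distributional hypothesis the project transfers from human language.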
4.
  • Blanch, Krister, 1991 (author)
  • Beyond-application datasets and automated fair benchmarking
  • 2023
  • Licentiate thesis (other academic/artistic) abstract
    • Beyond-application perception datasets are generalised datasets that emphasise the fundamental components of good machine perception data. When analysing the history of perception datasets, notable trends suggest that dataset design typically aligns with an application goal. Instead of focusing on a specific application, beyond-application datasets aim to capture high-quality, high-volume data from a highly kinematic environment, for the purpose of aiding algorithm development and testing in general. Algorithm benchmarking is a cornerstone of autonomous systems development, and allows developers to demonstrate their results in a comparative manner. However, most benchmarking systems allow developers to use their own hardware or select favourable data. There is also little focus on run-time performance and consistency, with benchmarking systems instead showcasing algorithm accuracy. Combining beyond-application dataset generation with fair benchmarking also raises the dilemma of how to provide the dataset to developers, as high-volume, high-quality dataset generation significantly increases dataset size compared to traditional perception datasets. This thesis presents the first results of attempting the creation of such a dataset. The dataset was built using a maritime platform, selected due to the highly dynamic environment presented on water. The design and initial testing of this platform are detailed, as well as methods of sensor validation. The thesis then presents a method of fair benchmarking that utilises remote containerisation in a way that allows developers to present their software to the dataset, instead of having to first store a copy locally. To test this dataset and automatic online benchmarking, a number of reference algorithms were required for initial results.
Three algorithms were built, using the data from three different sensors captured on the maritime platform. Each algorithm calculates vessel odometry, and the automatic benchmarking system was used to show the accuracy and run-time performance of these algorithms. The containerised approach was found to alleviate data management concerns, prevent inflated accuracy results, and demonstrate precisely how computationally intensive each algorithm was.
  •  
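The core of the fair-benchmarking idea in this thesis (every algorithm sees identical data on the same machine, and both accuracy and run time are reported) can be sketched as a small harness. The containerised infrastructure itself is not reproduced here; `benchmark` and its report format are hypothetical.

```python
import time

def benchmark(algorithms, data, ground_truth):
    """Run each algorithm on identical data; report mean absolute error and run time.

    Fixing the data and the machine is what keeps the comparison fair:
    developers cannot pick favourable inputs or hardware.
    """
    report = {}
    for name, fn in algorithms.items():
        start = time.perf_counter()
        estimate = fn(data)
        elapsed = time.perf_counter() - start
        error = sum(abs(e - g) for e, g in zip(estimate, ground_truth)) / len(ground_truth)
        report[name] = {"mean_abs_error": error, "seconds": elapsed}
    return report
```

In the thesis's setting, the "algorithms" would arrive as containers and `data` would be streamed server-side, so run-time numbers are comparable across submissions.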
5.
  • Yun, Yixiao, 1987, et al. (authors)
  • Maximum-Likelihood Object Tracking from Multi-View Video by Combining Homography and Epipolar Constraints
  • 2012
  • In: 6th ACM/IEEE Int'l Conf on Distributed Smart Cameras (ICDSC 12), Oct 30 - Nov 2, 2012, Hong Kong. ISBN 9781450317726, 6 pages
  • Conference paper (peer-reviewed) abstract
    • This paper addresses the problem of object tracking in occlusion scenarios, where multiple uncalibrated cameras with overlapping fields of view are used. We propose a novel method where tracking is first done independently for each view, and the tracking results are then mapped between each pair of views to improve the tracking in individual views, under the assumptions that objects are not occluded in all views and move upright on a planar ground, which induces a homography relation between each pair of views. The tracking results are mapped by jointly exploiting the geometric constraints of the homography, the epipolar geometry and the vertical vanishing point. The main contributions of this paper include: (a) formulating a reference model of multi-view object appearance using region covariance for each view; (b) defining a likelihood measure based on geodesics on a Riemannian manifold that is consistent with the destination view, by mapping both the estimated positions and appearances of the tracked object from other views; (c) locating the object in each individual view based on a maximum-likelihood criterion over multi-view estimations of object position. Experiments have been conducted on videos from multiple uncalibrated cameras, where targets experience long-term partial or full occlusions. Comparisons with two existing methods and performance evaluations are also made. Test results have shown the effectiveness of the proposed method in terms of robustness against tracking drift caused by occlusions.
  •  
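Mapping a tracked position between views via a homography, which this method relies on, amounts to a matrix multiply and a perspective divide. A minimal sketch; the paper's covariance appearance models and likelihood fusion are not shown:

```python
import numpy as np

def map_point(H, p):
    """Map image point p = (x, y) from one view to another via a 3x3 homography H."""
    q = H @ np.array([p[0], p[1], 1.0])  # lift to homogeneous coordinates
    return q[:2] / q[2]                  # perspective divide back to the image plane
```

With the ground-plane assumption from the abstract, a point at a person's feet in one view maps through `H` to the corresponding ground position in another view, which is what lets an unoccluded view correct an occluded one.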
6.
  • Schötz, Susanne, et al. (authors)
  • Phonetic Characteristics of Domestic Cat Vocalisations
  • 2017
  • In: Proceedings of the 1st International Workshop on Vocal Interactivity in-and-between Humans, Animals and Robots, VIHAR 2017. ISBN 9782956202905, pp. 5-6
  • Conference paper (peer-reviewed) abstract
    • The cat (Felis catus, Linnaeus 1758) has lived around or with humans for at least 10,000 years, and is now one of the most popular pets in the world, with more than 600 million individuals. Domestic cats have developed a more extensive, variable and complex vocal repertoire than most other members of the Carnivora, which may be explained by their social organisation, their nocturnal activity and the long period of association between mother and young. Still, we know surprisingly little about the phonetic characteristics of these sounds, and about the interaction between cats and humans. Members of the research project Melody in human–cat communication (Meowsic) investigate the prosodic characteristics of cat vocalisations as well as the communication between human and cat. The first step includes a categorisation of cat vocalisations. The next step will investigate how humans perceive the vocal signals of domestic cats. This paper presents an outline of the project, which has only recently started.
  •  
7.
  • Ali, Muhaddisa Barat, 1986 (author)
  • Deep Learning Methods for Classification of Gliomas and Their Molecular Subtypes, From Central Learning to Federated Learning
  • 2023
  • Doctoral thesis (other academic/artistic) abstract
    • Gliomas are the most common type of brain cancer in adults. Under the updated 2016 World Health Organization (WHO) classification of tumors of the central nervous system (CNS), identification of the molecular subtypes of gliomas is important. For low grade gliomas (LGGs), predicting molecular subtypes from magnetic resonance imaging (MRI) scans may be difficult without taking a biopsy. With the development of machine learning (ML) methods such as deep learning (DL), molecular-based classification methods have shown promising results on MRI scans that may assist clinicians in prognosis and in deciding on a treatment strategy. However, DL requires large training datasets with tumor class labels and tumor boundary annotations, and manual annotation of tumor boundaries is a time-consuming and expensive process. The thesis is based on the work developed in five papers on gliomas and their molecular subtypes. We propose novel methods that provide improved performance. The proposed methods consist of a multi-stream convolutional autoencoder (CAE)-based classifier, a deep convolutional generative adversarial network (DCGAN) to enlarge the training dataset, a CycleGAN to handle domain shift, a novel federated learning (FL) scheme to allow local client-based training with dataset protection, and the use of bounding boxes for MRIs when tumor boundary annotations are not available. Experimental results showed that DCGAN-generated MRIs enlarged the original training dataset and improved the classification performance on test sets. CycleGAN showed good domain adaptation on multiple source datasets and improved the classification performance. The proposed FL scheme showed slightly degraded performance compared to the central learning (CL) approach while protecting dataset privacy.
Using tumor bounding boxes proved to be an alternative to tumor boundary annotation for tumor classification and segmentation, with a trade-off between a slight decrease in performance and saving clinicians' time in manual marking. The proposed methods may benefit future research in bringing DL tools into clinical practice for assisting tumor diagnosis and helping the decision-making process.
  •  
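The thesis abstract does not spell out its FL scheme, but the standard FedAvg-style aggregation such schemes build on can be sketched as a size-weighted average of client model weights; `federated_average` is an illustrative name, not the thesis's method.

```python
def federated_average(client_weights, client_sizes):
    """Average client model weights, weighted by local dataset size (FedAvg-style).

    Only weight vectors leave the clients; the raw MRIs stay local,
    which is the dataset-protection property the thesis aims for.
    """
    total = sum(client_sizes)
    n = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n)]
```

The server repeats this after each local training round; the slight accuracy gap versus central learning reported above is the usual price of never pooling the data.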
8.
  • Isaksson, Martin, et al. (authors)
  • Adaptive Expert Models for Federated Learning
  • 2023
  • In: Lecture Notes in Computer Science. Cham: Springer Science and Business Media Deutschland GmbH. ISBN 9783031289958; 13448 LNAI, pp. 1-16
  • Conference paper (peer-reviewed) abstract
    • Federated Learning (FL) is a promising framework for distributed learning when data is private and sensitive. However, the state-of-the-art solutions in this framework are not optimal when data is heterogeneous and non-IID. We propose a practical and robust approach to personalization in FL that adjusts to heterogeneous and non-IID data by balancing exploration and exploitation of several global models. To achieve our aim of personalization, we use a Mixture of Experts (MoE) that learns to group clients that are similar to each other, while using the global models more efficiently. We show that our approach achieves an accuracy up to 29.78% better than the state-of-the-art and up to 4.38% better compared to a local model in a pathological non-IID setting, even though we tune our approach in the IID setting. © 2023, The Author(s)
  •  
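A minimal sketch of the gating idea behind this paper's Mixture of Experts: each client weights the global models by a softmax over similarity scores and blends their outputs. The scoring and training details of the actual MoE are omitted, and all names here are illustrative.

```python
from math import exp

def gate(similarities):
    """Softmax gate: weight each global model by the client's similarity score."""
    m = max(similarities)                      # subtract max for numerical stability
    exps = [exp(s - m) for s in similarities]
    z = sum(exps)
    return [e / z for e in exps]

def moe_predict(model_outputs, similarities):
    """Blend per-model predictions with the gate weights (hypothetical MoE sketch)."""
    weights = gate(similarities)
    return sum(w * o for w, o in zip(weights, model_outputs))
```

A client whose data resembles one cluster's global model pushes its gate weight toward that model (exploitation) while nonzero weights on the others preserve exploration, the balance the abstract describes.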
9.
  • Lv, Zhihan, Dr. 1984-, et al. (authors)
  • Editorial: 5G for Augmented Reality
  • 2022
  • In: Mobile Networks and Applications. Springer. ISSN 1383-469X, 1572-8153.
  • Journal article (peer-reviewed)
  •  
10.
  • Norlund, Tobias, 1991, et al. (authors)
  • Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it?
  • 2021
  • In: Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pp. 149-162, Punta Cana, Dominican Republic. Association for Computational Linguistics.
  • Conference paper (peer-reviewed) abstract
    • Large language models are known to suffer from the hallucination problem, in that they are prone to output statements that are false or inconsistent, indicating a lack of knowledge. A proposed solution is to provide the model with additional data modalities that complement the knowledge obtained through text. We investigate the use of visual data to complement the knowledge of large language models by proposing a method for evaluating visual knowledge transfer to text for uni- or multimodal language models. The method is based on two steps: 1) a novel task querying for knowledge of memory colors, i.e. typical colors of well-known objects, and 2) filtering of model training data to clearly separate knowledge contributions. Additionally, we introduce a model architecture that involves a visual imagination step and evaluate it with our proposed method. We find that our method can successfully be used to measure visual knowledge transfer capabilities in models and that our novel model architecture shows promising results for leveraging multimodal knowledge in a unimodal setting.
  •  
11.
  • Fu, Keren, et al. (authors)
  • Deepside: A general deep framework for salient object detection
  • 2019
  • In: Neurocomputing. Elsevier BV. ISSN 0925-2312, 1872-8286; 356, pp. 69-82
  • Journal article (peer-reviewed) abstract
    • Deep learning-based salient object detection techniques have shown impressive results compared to conventional saliency detection by handcrafted features. Integrating hierarchical features of Convolutional Neural Networks (CNNs) to achieve fine-grained saliency detection is a current trend, and various deep architectures have been proposed, including "skip-layer", "top-down" and "short-connection" architectures. While these architectures have achieved progressive improvement in detection accuracy, the underlying distinctions and connections between these schemes remain unclear. In this paper, we review and draw the underlying connections between these architectures, and show that they can actually be unified into a general framework which simply has side structures of different depths. Based on the idea of designing deeper side structures for better detection accuracy, we propose a unified framework called Deepside that can be deeply supervised to incorporate hierarchical CNN features. Additionally, to fuse multiple side outputs from the network, we propose a novel fusion technique based on segmentation-based pooling, which serves as a built-in component of the CNN architecture and guarantees more accurate boundary details of detected salient objects. The effectiveness of the proposed Deepside scheme against state-of-the-art models is validated on 8 benchmark datasets.
  •  
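The exact segmentation-based pooling operator is not specified in the abstract; one plausible reading, pooling a saliency map so that it is constant within each segment and therefore snaps to segment boundaries, can be sketched as:

```python
import numpy as np

def segmentation_pooling(saliency, segments):
    """Replace each pixel's saliency with the mean over its segment.

    saliency: float array (H, W); segments: int label array (H, W).
    The output follows segment boundaries exactly, which is how such pooling
    can sharpen object contours.
    """
    pooled = np.empty_like(saliency, dtype=float)
    for label in np.unique(segments):
        mask = segments == label
        pooled[mask] = saliency[mask].mean()
    return pooled
```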
12.
  • Ge, Chenjie, 1991, et al. (authors)
  • Co-Saliency-Enhanced Deep Recurrent Convolutional Networks for Human Fall Detection in E-Healthcare
  • 2018
  • In: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. ISSN 1557-170X, pp. 1572-1575
  • Conference paper (peer-reviewed) abstract
    • This paper addresses the issue of fall detection from videos for e-healthcare and assisted living. Instead of using conventional hand-crafted features from videos, we propose a fall detection scheme based on a co-saliency-enhanced recurrent convolutional network (RCN) architecture. In the proposed scheme, the RCN is realized by a set of Convolutional Neural Networks (CNNs) at segment level followed by a Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), to handle the time-dependent video frames. The co-saliency-based method enhances salient human activity regions and hence further improves the deep learning performance. The main contributions of the paper include: (a) a recurrent convolutional network (RCN) architecture dedicated to the task of human fall detection in videos; (b) the integration of a co-saliency enhancement into the deep learning scheme to further improve performance; (c) extensive empirical tests for performance analysis and evaluation under different network settings and data partitionings. Experiments using the proposed scheme were conducted on an open dataset containing multicamera videos from different view angles, and results have shown very good performance (test accuracy 98.96%). Comparisons with two existing methods provide further support for the proposed scheme.
  •  
13.
  • Gerken, Jan, 1991, et al. (authors)
  • Equivariance versus augmentation for spherical images
  • 2022
  • In: Proceedings of Machine Learning Research, pp. 7404-7421
  • Conference paper (peer-reviewed) abstract
    • We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images. We compare the performance of the group equivariant networks known as S2CNNs and standard non-equivariant CNNs trained with an increasing amount of data augmentation. The chosen architectures can be considered baseline references for the respective design paradigms. Our models are trained and evaluated on single or multiple items from the MNIST- or FashionMNIST dataset projected onto the sphere. For the task of image classification, which is inherently rotationally invariant, we find that by considerably increasing the amount of data augmentation and the size of the networks, it is possible for the standard CNNs to reach at least the same performance as the equivariant network. In contrast, for the inherently equivariant task of semantic segmentation, the non-equivariant networks are consistently outperformed by the equivariant networks with significantly fewer parameters. We also analyze and compare the inference latency and training times of the different networks, enabling detailed tradeoff considerations between equivariant architectures and data augmentation for practical problems.
  •  
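The data-augmentation side of this comparison can be illustrated with the simplest rotational augmentation: generating the four 90-degree rotations of each training image, so a plain CNN is pushed toward the invariance an equivariant network has built in. The paper's spherical rotations are more general; this planar version is only a sketch.

```python
import numpy as np

def augment_rotations(image):
    """Return the four 90-degree rotations of an image (k = 0, 1, 2, 3).

    Training on all four copies teaches a non-equivariant CNN the invariance
    that an S2CNN-style equivariant network gets for free from its architecture.
    """
    return [np.rot90(image, k) for k in range(4)]
```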
14.
  • Ge, Chenjie, 1991, et al. (authors)
  • Enlarged Training Dataset by Pairwise GANs for Molecular-Based Brain Tumor Classification
  • 2020
  • In: IEEE Access. ISSN 2169-3536; 8:1, pp. 22560-22570
  • Journal article (peer-reviewed) abstract
    • This paper addresses brain tumor subtype classification using Magnetic Resonance Images (MRIs) from different scanner modalities, such as T1-weighted, contrast-enhanced T1-weighted, T2-weighted and FLAIR images. Currently, most available glioma datasets are relatively moderate in size, and are often accompanied by incomplete MRI modalities. To tackle the commonly encountered problems of insufficiently large brain tumor datasets and incomplete image modalities in deep learning, we propose to add augmented brain MR images to enlarge the training dataset by employing a pairwise Generative Adversarial Network (GAN) model. The pairwise GAN is able to generate synthetic MRIs across different modalities. To achieve a patient-level diagnostic result, we propose a post-processing strategy that combines the slice-level glioma subtype classification results by majority voting. A two-stage coarse-to-fine training strategy is proposed to learn glioma features using GAN-augmented MRIs followed by real MRIs. To evaluate the effectiveness of the proposed scheme, experiments have been conducted on a brain tumor dataset for classifying glioma molecular subtypes: isocitrate dehydrogenase 1 (IDH1) mutation and IDH1 wild-type. Our results on the dataset have shown good performance (test accuracy 88.82%). Comparisons with several state-of-the-art methods are also included.
  •  
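The patient-level post-processing step described above, majority voting over slice-level predictions, is simple enough to sketch directly; the label strings are illustrative.

```python
from collections import Counter

def patient_level_vote(slice_predictions):
    """Combine slice-level subtype predictions into one patient-level label
    by majority vote. Counter.most_common breaks ties by first-seen order."""
    return Counter(slice_predictions).most_common(1)[0][0]
```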
15.
  • Lv, Zhihan, Dr. 1984-, et al. (authors)
  • 5G for mobile augmented reality
  • 2022
  • In: International Journal of Communication Systems. John Wiley & Sons. ISSN 1074-5351, 1099-1131; 35:5
  • Journal article (other academic/artistic)
  •  
16.
  • Somanath, Sanjay, 1994, et al. (authors)
  • Towards Urban Digital Twins: A Workflow for Procedural Visualization Using Geospatial Data
  • 2024
  • In: Remote Sensing. ISSN 2072-4292; 16:11
  • Journal article (peer-reviewed) abstract
    • A key feature of urban digital twins (DTs) is an automatically generated, detailed 3D representation of the built and unbuilt environment from aerial imagery, footprints, LiDAR, or a fusion of these. Such 3D models have applications in architecture, civil engineering, urban planning, construction, real estate, Geographical Information Systems (GIS), and many other areas. While the visualization of large-scale data in conjunction with the generated 3D models is a recurring and resource-intensive task, an automated workflow is complex, requiring many steps to achieve a high-quality visualization. Building reconstruction methods have come a long way, from manual approaches to semi-automatic or fully automatic ones. This paper aims to complement existing methods of 3D building generation. First, we present a literature review covering different options for procedural context generation and visualization methods, focusing on workflows and data pipelines. Next, we present a semi-automated workflow that extends the building reconstruction pipeline to include procedural context generation using Python and Unreal Engine. Finally, we propose a workflow for integrating various types of large-scale urban analysis data for visualization. We conclude with a series of challenges faced in achieving such pipelines and the limitations of the current approach. The steps towards a complete end-to-end solution involve further developing robust systems for building detection, rooftop recognition, and geometry generation, and importing and visualizing data in the same 3D environment, highlighting a need for further research and development in this field.
  •  
17.
  • Rumman, Nadine Abu, et al. (authors)
  • Skin deformation methods for interactive character animation
  • 2017
  • In: Communications in Computer and Information Science. Cham: Springer International Publishing. ISSN 1865-0937, 1865-0929; 693, pp. 153-174
  • Conference paper (peer-reviewed) abstract
    • Character animation is a vital component of contemporary computer games, animated feature films and virtual reality applications. The problem of creating appealing character animation can best be described by the title of the animation bible: “The Illusion of Life”. The focus is not on completing a given motion task, but more importantly on how this motion task is performed by the character. This does not necessarily require realistic behavior, but behavior that is believable. This of course includes the skin deformations when the character is moving. In this paper, we focus on the existing research in the area of skin deformation, ranging from skeleton-based deformation and volume preserving techniques to physically based skinning methods. We also summarize the recent contributions in deformable and soft body simulations for articulated characters, and discuss various geometric and example-based approaches. © Springer International Publishing AG 2017.
  •  
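The skeleton-based deformation this survey starts from, linear blend skinning, computes each deformed vertex as a weight-blended sum of bone transforms applied to the rest pose. A minimal numpy sketch; the 4x4 matrix convention and function name are assumptions for illustration.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, bone_transforms, weights):
    """Deform vertices as weight-blended bone transforms (classic LBS).

    rest_vertices: (V, 3) rest-pose positions.
    bone_transforms: list of B homogeneous 4x4 matrices.
    weights: (V, B) skinning weights, rows summing to 1.
    """
    V = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((V, 1))])   # homogeneous coordinates
    out = np.zeros((V, 3))
    for b, T in enumerate(bone_transforms):
        out += weights[:, b:b+1] * (homo @ T.T)[:, :3]   # blend each bone's contribution
    return out
```

Blending transformed positions rather than transforms is what causes LBS's well-known "candy-wrapper" collapse near twisting joints, the artifact that motivates the volume-preserving and physically based methods the survey goes on to cover.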
18.
  • Frid, Emma, 1988-, et al. (authors)
  • Perceptual Evaluation of Blended Sonification of Mechanical Robot Sounds Produced by Emotionally Expressive Gestures : Augmenting Consequential Sounds to Improve Non-verbal Robot Communication
  • 2021
  • In: International Journal of Social Robotics. Springer Nature. ISSN 1875-4791, 1875-4805.
  • Journal article (peer-reviewed) abstract
    • This paper presents two experiments focusing on perception of mechanical sounds produced by expressive robot movement and blended sonifications thereof. In the first experiment, 31 participants evaluated emotions conveyed by robot sounds through free-form text descriptions. The sounds were inherently produced by the movements of a NAO robot and were not specifically designed for communicative purposes. Results suggested no strong coupling between the emotional expression of gestures and how sounds inherent to these movements were perceived by listeners; joyful gestures did not necessarily result in joyful sounds. A word that reoccurred in text descriptions of all sounds, regardless of the nature of the expressive gesture, was “stress”. In the second experiment, blended sonification was used to enhance and further clarify the emotional expression of the robot sounds evaluated in the first experiment. Analysis of quantitative ratings of 30 participants revealed that the blended sonification successfully contributed to enhancement of the emotional message for sound models designed to convey frustration and joy. Our findings suggest that blended sonification guided by perceptual research on emotion in speech and music can successfully improve communication of emotions through robot sounds in auditory-only conditions.
  •  
19.
  • Dombrowski, Ann Kathrin, et al. (authors)
  • Diffeomorphic Counterfactuals with Generative Models
  • 2024
  • In: IEEE Transactions on Pattern Analysis and Machine Intelligence. ISSN 1939-3539, 0162-8828; 46:5, pp. 3257-3274
  • Journal article (peer-reviewed) abstract
    • Counterfactuals can explain classification decisions of neural networks in a human-interpretable way. We propose a simple but effective method to generate such counterfactuals. More specifically, we perform a suitable diffeomorphic coordinate transformation and then perform gradient ascent in these coordinates to find counterfactuals which are classified with high confidence as a specified target class. We propose two methods to leverage generative models to construct such suitable coordinate systems that are either exactly or approximately diffeomorphic. We analyze the generation process theoretically using Riemannian differential geometry and validate the quality of the generated counterfactuals using various qualitative and quantitative measures.
  •  
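The generation procedure, gradient ascent on the target-class score in diffeomorphic coordinates, can be illustrated in one dimension with exp/log as the invertible map and a numerical gradient. This toy stand-in omits the paper's generative models and Riemannian analysis; all names are illustrative.

```python
from math import exp, log

def sigmoid(t):
    return 1.0 / (1.0 + exp(-t))

def counterfactual(x0, classifier, transform, inverse, steps=200, lr=0.1, eps=1e-5):
    """Gradient ascent on the target-class score in transformed coordinates.

    z = inverse(x) maps the input into the diffeomorphic coordinates;
    ascending in z and mapping back keeps x on the transform's image,
    mimicking how latent-space ascent keeps counterfactuals on-manifold.
    """
    z = inverse(x0)
    for _ in range(steps):
        # central-difference estimate of d score / d z
        grad = (classifier(transform(z + eps)) - classifier(transform(z - eps))) / (2 * eps)
        z += lr * grad
    return transform(z)
```

With `transform = exp`, every iterate stays strictly positive however far the ascent runs, which is the 1-D analogue of counterfactuals staying on the data manifold.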
20.
  • Menghi, Claudio, 1987, et al. (authors)
  • Poster: Property specification patterns for robotic missions
  • 2018
  • In: Proceedings - International Conference on Software Engineering. New York, NY, USA: ACM. ISSN 0270-5257; Part F137351, pp. 434-435
  • Conference paper (peer-reviewed) abstract
    • Engineering dependable software for mobile robots is becoming increasingly important. A core asset in engineering mobile robots is the mission specification, a formal description of the goals that mobile robots shall achieve. Such mission specifications are used, among others, to synthesize, verify, simulate, or guide the engineering of robot software. Developing precise mission specifications is challenging. Engineers need to translate the mission requirements into specification structures expressed in a logical language, a laborious and error-prone task. To mitigate this problem, we present a catalog of mission specification patterns for mobile robots. Our focus is on robot movement, one of the most prominent and recurrent specification problems for mobile robots. Our catalog maps common mission specification problems to recurrent solutions, which we provide as templates that can be used by engineers. The patterns are the result of analyzing missions extracted from the literature. For each pattern, we describe usage intent, known uses, relationships to other patterns, and, most importantly, a template representing the solution as a logical formula in temporal logic. Our specification patterns constitute reusable building blocks that can be used by engineers to create complex mission specifications while reducing specification mistakes. We believe that our patterns support researchers working on tool support and techniques to synthesize and verify mission specifications, and language designers creating rich domain-specific languages for mobile robots, incorporating our patterns as language concepts.
  •  
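Two recurrent movement patterns from the mission-pattern literature, "visit" and "sequenced visit", can be rendered as LTL templates. The formula builders below are illustrative and are not the paper's tooling; `F` is the standard LTL "eventually" operator.

```python
def visit_pattern(locations):
    """'Visit' pattern: the robot must eventually reach every listed location,
    in any order (LTL: F(l1) & F(l2) & ...)."""
    return " & ".join(f"F({loc})" for loc in locations)

def sequenced_visit_pattern(locations):
    """'Sequenced visit' pattern: reach the locations in the given order,
    expressed as nested eventually operators."""
    formula = locations[-1]
    for loc in reversed(locations[:-1]):
        formula = f"{loc} & F({formula})"
    return f"F({formula})"
```

Filling a template with concrete location propositions is exactly the reuse step the catalog enables: the engineer picks the pattern matching the mission intent instead of hand-writing (and possibly mis-nesting) the temporal formula.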
21.
  • Lv, Zhihan, Dr. 1984-, et al. (authors)
  • Deep Learning for Security in Digital Twins of Cooperative Intelligent Transportation Systems
  • 2022
  • In: IEEE Transactions on Intelligent Transportation Systems (Print). Institute of Electrical and Electronics Engineers (IEEE). ISSN 1524-9050, 1558-0016; 23:9, pp. 16666-16675
  • Journal article (peer-reviewed) abstract
    • The purpose is to solve the security problems of the Cooperative Intelligent Transportation System (CITS) Digital Twins (DTs) in the Deep Learning (DL) environment. The DL algorithm is improved; the Convolutional Neural Network (CNN) is combined with Support Vector Regression (SVR); the DTs technology is introduced. Eventually, a CITS DTs model is constructed based on CNN-SVR, whose security performance and effect are analyzed through simulation experiments. Compared with other algorithms, the security prediction accuracy of the proposed algorithm reaches 90.43%. Besides, the proposed algorithm outperforms other algorithms regarding Precision, Recall, and F1. The data transmission performances of the proposed algorithm and other algorithms are compared. The proposed algorithm can ensure that emergency messages can be responded to in time, with a delay of less than 1.8s. Meanwhile, it can better adapt to the road environment, maintain high data transmission speed, and provide reasonable path planning for vehicles so that vehicles can reach their destinations faster. The impacts of different factors on the transportation network are analyzed further. Results suggest that under path guidance, as the Market Penetration Rate (MPR), Following Rate (FR), and Congestion Level (CL) increase, the guidance strategy's effects become more apparent. When MPR ranges between 40% and 80% and the congestion is level III, the ATT decreases the fastest, and the improvement effect of the guidance strategy is more apparent. The proposed DL algorithm model can lower the data transmission delay of the system, increase the prediction accuracy, and reasonably change the paths to suppress the spread of traffic congestion, providing an experimental reference for developing and improving urban transportation.
  •  
22.
  • Nguyen, Björnborg, 1992, et al. (författare)
  • Systematic benchmarking for reproducibility of computer vision algorithms for real-time systems: The example of optic flow estimation
  • 2019
  • Ingår i: IEEE International Conference on Intelligent Robots and Systems. - : IEEE. - 2153-0858 .- 2153-0866. ; , s. 5264-5269
  • Konferensbidrag (refereegranskat)abstract
    • Until now there have been few formalized methods for conducting systematic benchmarking aiming at reproducible results when it comes to computer vision algorithms. This is evident from lists of algorithms submitted to prominent datasets: authors of a novel method in many cases primarily state the performance of their algorithms in relation to a shallow description of the hardware system where it was evaluated. There are significant problems linked to this non-systematic approach of reporting performance, especially when comparing different approaches and when it comes to the reproducibility of claimed results. Furthermore, it is unclear how to conduct retrospective performance analysis, such as assessing an algorithm's suitability for embedded real-time systems over time as the underlying hardware and software change. This paper proposes and demonstrates a systematic way of addressing such challenges by adopting containerization of software, aiming at formalization and reproducibility of benchmarks. Our results call for maintainers of broadly accepted datasets in the computer vision community to strive for systematic comparison and reproducibility of submissions to increase the value and adoption of computer vision algorithms in the future.
  •  
23.
  • Frid, Emma, et al. (författare)
  • Perception of Mechanical Sounds Inherent to Expressive Gestures of a NAO Robot - Implications for Movement Sonification of Humanoids
  • 2018
  • Ingår i: Proceedings of the 15th Sound and Music Computing Conference. - Limassol, Cyprus. - 9789963697304
  • Konferensbidrag (refereegranskat)abstract
    • In this paper we present a pilot study carried out within the project SONAO. The SONAO project aims to compensate for limitations in robot communicative channels with an increased clarity of Non-Verbal Communication (NVC) through expressive gestures and non-verbal sounds. More specifically, the purpose of the project is to use movement sonification of expressive robot gestures to improve Human-Robot Interaction (HRI). The pilot study described in this paper focuses on mechanical robot sounds, i.e. sounds that have not been specifically designed for HRI but are inherent to robot movement. Results indicated a low correspondence between perceptual ratings of mechanical robot sounds and emotions communicated through gestures. In general, the mechanical sounds themselves appeared not to carry much emotional information compared to video stimuli of expressive gestures. However, some mechanical sounds did communicate certain emotions, e.g. frustration. In general, the sounds appeared to communicate arousal more effectively than valence. We discuss potential issues and possibilities for the sonification of expressive robot gestures and the role of mechanical sounds in such a context. Emphasis is put on the need to mask or alter sounds inherent to robot movement, using for example blended sonification.
  •  
24.
  • Eriksson, Patric, et al. (författare)
  • A role for 'sensor simulation' and 'pre-emptive learning' in computer aided robotics
  • 1995
  • Ingår i: 26th International Symposium on Industrial Robots, Symposium Proceedings. - : Mechanical Engineering Publ.. - 1860580009 ; , s. 135-140
  • Konferensbidrag (refereegranskat)abstract
    • Sensor simulation in Computer Aided Robotics (CAR) can enhance the capabilities of such systems to enable off-line generation of programmes for sensor driven robots. However, such sensor simulation is not commonly supported in current computer aided robotic environments. A generic sensor object model for the simulation of sensors in graphical environments is described in this paper. Such a model can be used to simulate a variety of sensors, for example photoelectric, proximity and ultrasonic sensors. Test results presented here show that this generic sensor model can be customised to emulate the characteristics of the real sensors. The preliminary findings from the first off-line trained mobile robot are presented. The results indicate that sensor simulation within CARs can be used to train robots to adapt to changing environments.
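The generic sensor object idea described in the abstract can be illustrated with a minimal sketch. The class and parameters below are hypothetical, not the paper's actual model: a single sensor object whose detection range and noise level are customised per sensor type (photoelectric, proximity, ultrasonic).

```python
import random

class SimulatedSensor:
    """Hypothetical generic sensor object for a graphical robot simulation.

    One model, customised via detection range and noise level, stands in
    for different sensor types (photoelectric, proximity, ultrasonic).
    """

    def __init__(self, max_range, noise_std=0.0, seed=None):
        self.max_range = max_range   # maximum detection distance
        self.noise_std = noise_std   # Gaussian measurement noise
        self.rng = random.Random(seed)

    def read(self, true_distance):
        """Return a (possibly noisy) distance reading, or None if out of range."""
        if true_distance > self.max_range:
            return None
        return true_distance + self.rng.gauss(0.0, self.noise_std)

# A noise-free proximity-style sensor with a 0.5 m range
proximity = SimulatedSensor(max_range=0.5)
print(proximity.read(0.3))  # 0.3 (target detected)
print(proximity.read(0.8))  # None (out of range)
```

Customising `max_range` and `noise_std` per sensor instance is one plausible way to emulate the characteristics of different real sensors from a single generic model.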
  •  
25.
  • Jacobsson, Martin, 1976-, et al. (författare)
  • A Drone-mounted Depth Camera-based Motion Capture System for Sports Performance Analysis
  • 2023
  • Ingår i: Artificial Intelligence in HCI. - : Springer Nature. - 9783031358937 ; , s. 489-503
  • Konferensbidrag (refereegranskat)abstract
    • Video is the most used tool for sport performance analysis as it provides a common reference point for the coach and the athlete. The problem with video is that it is a subjective tool. To overcome this, motion capture systems can be used to get an objective 3D model of a person’s posture and motion, but only in laboratory settings. Unfortunately, many activities, such as most outdoor sports, cannot be captured in a lab without compromising the activity. In this paper, we propose to use an aerial drone system equipped with depth cameras and AI-based markerless motion capture software to perform automatic skeleton tracking and real-time sports performance analysis of athletes. We experiment with off-the-shelf drone systems, miniaturized depth cameras, and commercially available skeleton tracking software to build a system for analyzing sports-related performance of athletes in their real settings. To make this a fully working system, we have conducted a few initial experiments and identified many issues that still need to be addressed.
  •  
26.
  • Dobnik, Simon, 1977 (författare)
  • Coordinating spatial perspective in discourse
  • 2012
  • Ingår i: Proceedings of the Workshop on Vision and Language 2012 (VL'12): The 2nd Annual Meeting of the EPSRC Network on Vision and Language.
  • Konferensbidrag (övrigt vetenskapligt/konstnärligt)abstract
    • We present results of an on-line data collection experiment where we investigate the assignment and coordination of spatial perspective between a pair of dialogue participants situated in a constrained virtual environment.
  •  
27.
  • Latupeirissa, Adrian Benigno, et al. (författare)
  • Exploring emotion perception in sonic HRI
  • 2020
  • Ingår i: 17th Sound and Music Computing Conference. - Torino : Zenodo. ; , s. 434-441
  • Konferensbidrag (refereegranskat)abstract
    • Despite the fact that sounds produced by robots can affect the interaction with humans, sound design is often an overlooked aspect in Human-Robot Interaction (HRI). This paper explores how different sets of sounds designed for expressive robot gestures of a humanoid Pepper robot can influence the perception of emotional intentions. In the pilot study presented in this paper, participants were asked to rate different stimuli in terms of perceived affective states. The stimuli were audio-only, audio-video, and video-only, and contained either Pepper’s original servomotor noises, sawtooth waves, or more complex designed sounds. The preliminary results show a preference for the use of more complex sounds, thus confirming the necessity of further exploration in sonic HRI.
  •  
28.
  • Zhang, Chi, 1992, et al. (författare)
  • Cross or Wait? Predicting Pedestrian Interaction Outcomes at Unsignalized Crossings
  • 2023
  • Ingår i: IEEE Intelligent Vehicles Symposium, Proceedings. - Anchorage, Alaska, Canada, : IEEE. - 9798350346916 - 9798350346923
  • Konferensbidrag (refereegranskat)abstract
    • Predicting pedestrian behavior when interacting with vehicles is one of the most critical challenges in the field of automated driving. Pedestrian crossing behavior is influenced by various interaction factors, including time to arrival, pedestrian waiting time, the presence of zebra crossing, and the properties and personality traits of both pedestrians and drivers. However, these factors have not been fully explored for use in predicting interaction outcomes. In this paper, we use machine learning to predict pedestrian crossing behavior including pedestrian crossing decision, crossing initiation time (CIT), and crossing duration (CD) when interacting with vehicles at unsignalized crossings. Distributed simulator data are utilized for predicting and analyzing the interaction factors. Compared with the logistic regression baseline model, our proposed neural network model improves the prediction accuracy and F1 score by 4.46% and 3.23%, respectively. Our model also reduces the root mean squared error (RMSE) for CIT and CD by 21.56% and 30.14% compared with the linear regression model. Additionally, we have analyzed the importance of interaction factors, and present the results of models using fewer factors. This provides information for model selection in different scenarios with limited input features.
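The reported gains are in terms of standard metrics. For reference, RMSE (used for the CIT and CD regression targets) and binary F1 (used for the cross/wait decision) can be computed as in this generic sketch (not the paper's code):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error, e.g. for crossing initiation time (CIT) or duration (CD)."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def f1_score(y_true, y_pred):
    """Binary F1 for the cross/wait decision (1 = cross, 0 = wait)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

A relative improvement such as "reduces RMSE by 21.56%" then simply compares the two models' RMSE values on the same test set.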
  •  
29.
  • Gu, Irene Yu-Hua, 1953, et al. (författare)
  • Grassmann Manifold Online Learning and Partial Occlusion Handling for Visual Object Tracking under Bayesian Formulation
  • 2012
  • Ingår i: Proceedings - International Conference on Pattern Recognition. - 1051-4651. - 9784990644109 ; , s. 1463-1466
  • Konferensbidrag (refereegranskat)abstract
    • This paper addresses issues of online learning and occlusion handling in video object tracking. Although manifold tracking is promising, large pose changes and long term partial occlusions of video objects remain challenging. We propose a novel manifold tracking scheme that tackles such problems, with the following main novelties: (a) Online estimation of object appearances on Grassmann manifolds; (b) Optimal criterion-based occlusion handling during online learning; (c) Nonlinear dynamic model for appearance basis matrix and its velocity; (d) Bayesian formulations separately for the tracking and the online learning process. Two particle filters are employed: one is on the manifold for generating appearance particles and another on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate the tracking drift. Experiments on videos have shown robust tracking performance especially when objects contain significant pose changes accompanied by long-term partial occlusions. Evaluations and comparisons with two existing methods provide further support to the proposed method.
  •  
30.
  • Koriakina, Nadezhda, 1991-, et al. (författare)
  • Deep multiple instance learning versus conventional deep single instance learning for interpretable oral cancer detection
  • 2024
  • Ingår i: PLOS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 19:4 April
  • Tidskriftsartikel (refereegranskat)abstract
    • The current medical standard for setting an oral cancer (OC) diagnosis is histological examination of a tissue sample taken from the oral cavity. This process is time-consuming and more invasive than an alternative approach of acquiring a brush sample followed by cytological analysis. Using a microscope, skilled cytotechnologists are able to detect changes due to malignancy; however, introducing this approach into clinical routine is associated with challenges such as a lack of resources and experts. To design a trustworthy OC detection system that can assist cytotechnologists, we are interested in deep learning based methods that can reliably detect cancer, given only per-patient labels (thereby minimizing annotation bias), and also provide information regarding which cells are most relevant for the diagnosis (thereby enabling supervision and understanding). In this study, we perform a comparison of two approaches suitable for OC detection and interpretation: (i) conventional single instance learning (SIL) approach and (ii) a modern multiple instance learning (MIL) method. To facilitate systematic evaluation of the considered approaches, we, in addition to a real OC dataset with patient-level ground truth annotations, also introduce a synthetic dataset—PAP-QMNIST. This dataset shares several properties of OC data, such as image size and large and varied number of instances per bag, and may therefore act as a proxy model of a real OC dataset, while, in contrast to OC data, it offers reliable per-instance ground truth, as defined by design. PAP-QMNIST has the additional advantage of being visually interpretable for non-experts, which simplifies analysis of the behavior of methods. For both OC and PAP-QMNIST data, we evaluate performance of the methods utilizing three different neural network architectures. 
Our study indicates, somewhat surprisingly, that on both synthetic and real data, the performance of the SIL approach is better or equal to the performance of the MIL approach. Visual examination by a cytotechnologist indicates that the methods manage to identify cells which deviate from normality, including malignant cells as well as those suspicious for dysplasia. We share the code as open source.
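The core difference between the two setups compared above can be reduced to how per-cell scores are aggregated into a patient-level prediction. A minimal sketch, assuming hypothetical per-cell malignancy scores in [0, 1]; the study's actual networks and pooling are more elaborate:

```python
def sil_patient_score(cell_scores):
    """Single instance learning view: every cell is scored independently;
    one common way to reach a patient-level score is mean pooling."""
    return sum(cell_scores) / len(cell_scores)

def mil_patient_score(cell_scores):
    """Multiple instance learning view: a patient (bag) is positive if its
    most suspicious cell (instance) is, i.e. max pooling."""
    return max(cell_scores)

scores = [0.1, 0.9, 0.2]          # hypothetical per-cell malignancy scores
print(sil_patient_score(scores))  # mean -> ~0.4
print(mil_patient_score(scores))  # max  -> 0.9
```

MIL thus only needs per-patient (bag) labels, while per-instance inspection of the scores is what enables the kind of cell-level interpretation discussed in the abstract.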
  •  
31.
  • Kim, Jinhan, et al. (författare)
  • Guiding Deep Learning System Testing Using Surprise Adequacy
  • 2019
  • Ingår i: Proceedings - International Conference on Software Engineering. - : IEEE. - 0270-5257. ; 2019-May, s. 1039-1049
  • Konferensbidrag (refereegranskat)abstract
    • Deep Learning (DL) systems are rapidly being adopted in safety and security critical domains, urgently calling for ways to test their correctness and robustness. Testing of DL systems has traditionally relied on manual collection and labelling of data. Recently, a number of coverage criteria based on neuron activation values have been proposed. These criteria essentially count the number of neurons whose activation during the execution of a DL system satisfied certain properties, such as being above predefined thresholds. However, existing coverage criteria are not sufficiently fine grained to capture subtle behaviours exhibited by DL systems. Moreover, evaluations have focused on showing correlation between adversarial examples and proposed criteria rather than evaluating and guiding their use for actual testing of DL systems. We propose a novel test adequacy criterion for testing of DL systems, called Surprise Adequacy for Deep Learning Systems (SADL), which is based on the behaviour of DL systems with respect to their training data. We measure the surprise of an input as the difference in DL system's behaviour between the input and the training data (i.e., what was learnt during training), and subsequently develop this as an adequacy criterion: a good test input should be sufficiently but not overtly surprising compared to training data. Empirical evaluation using a range of DL systems from simple image classifiers to autonomous driving car platforms shows that systematic sampling of inputs based on their surprise can improve classification accuracy of DL systems against adversarial examples by up to 77.5% via retraining.
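The notion of surprise as distance to the training data can be sketched as follows. This is a simplified stand-in for the paper's surprise adequacy metrics (which include likelihood-based variants using kernel density estimation and class-conditional distance ratios), assuming activation traces are plain float vectors:

```python
import math

def distance_based_surprise(trace, training_traces):
    """Surprise of an input as the Euclidean distance from its activation
    trace to the nearest activation trace seen during training
    (larger distance = more surprising input)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(dist(trace, t) for t in training_traces)

training = [[0.0, 0.0], [1.0, 1.0]]
print(distance_based_surprise([0.1, 0.0], training))  # low surprise (near training data)
print(distance_based_surprise([5.0, 5.0], training))  # high surprise (far from training data)
```

Sampling test inputs so that their surprise values span a range, rather than cluster near zero, is the intuition behind using surprise as an adequacy criterion.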
  •  
32.
  • Daoud, Adel, 1981, et al. (författare)
  • Using Satellite Images and Deep Learning to Measure Health and Living Standards in India
  • 2023
  • Ingår i: Social Indicators Research. - : SPRINGER. - 0303-8300 .- 1573-0921. ; 167:1-3, s. 475-505
  • Tidskriftsartikel (refereegranskat)abstract
    • Using deep learning with satellite images enhances our understanding of human development at a granular spatial and temporal level. Most studies have focused on Africa and on a narrow set of asset-based indicators. This article leverages georeferenced village-level census data from across 40% of the population of India to train deep models that predict 16 indicators of human well-being from Landsat 7 imagery. Based on the principles of transfer learning, the census-based model is used as a feature extractor to train another model that predicts an even larger set of developmental variables—over 90 variables—included in two rounds of the National Family Health Survey (NFHS). The census-based-feature-extractor model outperforms the current standard in the literature for most of these NFHS variables. Overall, the results show that combining satellite data with Indian Census data unlocks rich information for training deep models that track human development at an unprecedented geographical and temporal resolution.
  •  
33.
  • ur Réhman, Shafiq, 1978- (författare)
  • Expressing emotions through vibration for perception and control
  • 2010
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • This thesis addresses a challenging problem: “how to let the visually impaired ‘see’ others' emotions”. We, human beings, are heavily dependent on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved etc. People use emotional information from facial expressions to switch between conversation topics and to determine attitudes of individuals. Missing emotional information from facial expressions and head gestures makes it extremely difficult for the visually impaired to interact with others in social events. To enhance the social interaction abilities of the visually impaired, in this thesis we have been working on the scientific topic of ‘expressing human emotions through vibrotactile patterns’. It is quite challenging to deliver human emotions through touch since our touch channel is very limited. We first investigated how to render emotions through a vibrator. We developed a real time “lipless” tracking system to extract dynamic emotions from the mouth and employed mobile phones as a platform for the visually impaired to perceive primary emotion types. Later on, we extended the system to render more general dynamic media signals: for example, rendering live football games through vibration on the mobile phone to improve the user's communication and entertainment experience. To display more natural emotions (i.e. emotion type plus emotion intensity), we developed the technology to enable the visually impaired to directly interpret human emotions. This was achieved by use of machine vision techniques and a vibrotactile display. The display is comprised of a ‘vibration actuators matrix’ mounted on the back of a chair and the actuators are sequentially activated to provide dynamic emotional information. The research focus has been on finding a global, analytical, and semantic representation for facial expressions to replace the state-of-the-art facial action coding system (FACS) approach. 
We proposed to use the manifold of facial expressions to characterize dynamic emotions. The basic emotional expressions with increasing intensity become curves on the manifold extended from the center. The blends of emotions lie between those curves, which could be defined analytically by the positions of the main curves. The manifold is the “Braille Code” of emotions. The developed methodology and technology have been extended for building assistive wheelchair systems to aid a specific group of disabled people, cerebral palsy or stroke patients (i.e. those lacking fine motor control skills), who do not have the ability to access and control the wheelchair by conventional means, such as a joystick or chin stick. The solution is to extract the manifold of the head or tongue gestures for controlling the wheelchair. The manifold is rendered by a 2D vibration array to provide the user of the wheelchair with action information from gestures and system status information, which is very important in enhancing the usability of such an assistive system. The current research work not only provides a foundation stone for vibrotactile rendering systems based on object localization but also a concrete step towards a new dimension of human-machine interaction.
  •  
34.
  • Liu, Yang, et al. (författare)
  • Movement Status Based Vision Filter for RoboCup Small-Size League
  • 2012
  • Ingår i: Advances in Automation and Robotics, Vol. 2. - Berlin, Heidelberg : Springer. - 9783642256455 - 9783642256462 ; , s. 79-86
  • Bokkapitel (övrigt vetenskapligt/konstnärligt)abstract
    • Small-size soccer league is a division of the RoboCup (Robot World Cup) competitions. Each team uses its own designed hardware and software to compete with others under defined rules. There are two kinds of data which the strategy system will receive from the dedicated server: one is the referee commands, and the other is the vision data. However, due to the network delay and the vision noise, we have to process the data before we can actually use it. Therefore, a certain mechanism is needed in this case. Instead of using some prevalent and complex algorithms, this paper proposes to solve this problem from a simple kinematics and mathematics point of view, which can be implemented effectively by hobbyists and undergraduate students. We divide this problem by speed status and handle it in three different situations. Testing results show good performance with this algorithm and great potential in filtering vision data and thus forecasting the actual coordinates of tracked objects.
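The kinematic forecasting idea, predicting where an object will be once delayed vision data arrives, can be sketched with a constant-velocity model. This is a minimal illustration of the principle, not the chapter's full three-situation filter:

```python
def predict_position(pos, prev_pos, dt, delay):
    """Constant-velocity forecast: given two consecutive vision frames `dt`
    seconds apart, estimate where the object will be `delay` seconds after
    the newest frame (compensating for network delay)."""
    vx = (pos[0] - prev_pos[0]) / dt
    vy = (pos[1] - prev_pos[1]) / dt
    return (pos[0] + vx * delay, pos[1] + vy * delay)

# Robot seen at (0, 0) then (10, 5) mm in frames 1/60 s apart; forecast 1/30 s ahead
print(predict_position((10, 5), (0, 0), 1 / 60, 1 / 30))  # about (30.0, 15.0)
```

Splitting the problem by speed status, as the chapter does, amounts to choosing when such extrapolation is trustworthy (moving object) versus when the raw or smoothed position should be kept (stationary or noisy readings).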
  •  
35.
  • Mitsioni, Ioanna, 1991-, et al. (författare)
  • Interpretability in Contact-Rich Manipulation via Kinodynamic Images
  • 2021
  • Ingår i: Proceedings - IEEE International Conference on Robotics and Automation. - : Institute of Electrical and Electronics Engineers (IEEE). - 1050-4729. ; 2021-May, s. 10175-10181
  • Konferensbidrag (refereegranskat)abstract
    • Deep Neural Networks (NNs) have been widely utilized in contact-rich manipulation tasks to model the complicated contact dynamics. However, NN-based models are often difficult to decipher which can lead to seemingly inexplicable behaviors and unidentifiable failure cases. In this work, we address the interpretability of NN-based models by introducing the kinodynamic images. We propose a methodology that creates images from kinematic and dynamic data of contact-rich manipulation tasks. By using images as the state representation, we enable the application of interpretability modules that were previously limited to vision-based tasks. We use this representation to train a Convolutional Neural Network (CNN) and we extract interpretations with Grad-CAM to produce visual explanations. Our method is versatile and can be applied to any classification problem in manipulation tasks to visually interpret which parts of the input drive the model's decisions and distinguish its failure modes, regardless of the features used. Our experiments demonstrate that our method enables detailed visual inspections of sequences in a task, and high-level evaluations of a model's behavior.
  •  
36.
  • Svärd, Malin, 1985 (författare)
  • Computational driver behavior models for vehicle safety applications
  • 2023
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • The aim of this thesis is to investigate how human driving behaviors can be formally described in mathematical models intended for online personalization of advanced driver assistance systems (ADAS) or offline virtual safety evaluations. Both longitudinal (braking) and lateral (steering) behaviors in routine driving and emergencies are addressed. Special attention is paid to driver glance behavior in critical situations and the role of peripheral vision. First, a hybrid framework based on autoregressive models with exogenous input (ARX-models) is employed to predict and classify driver control in real time. Two models are suggested, one targeting steering behavior and the other longitudinal control behavior. Although the predictive performance is unsatisfactory, both models can distinguish between different driving styles. Moreover, a basic model for drivers' brake initiation and modulation in critical longitudinal situations (specifically for rear-end conflicts) is constructed. The model is based on a conceptual framework of noisy evidence accumulation and predictive processing. Several model extensions related to gaze behavior are also proposed and successfully fitted to real-world crashes and near-crashes. The influence of gaze direction is further explored in a driving simulator study, showing glance response times to be independent of the glance's visual eccentricity, while brake response times increase for larger gaze angles, as does the rate of missed target detections. Finally, the potential of a set of metrics to quantify subjectively perceived risk in lane departure situations to explain drivers' recovery steering maneuvers was investigated. The most influential factors were the relative yaw angle and splay angle error at steering initiation. Surprisingly, it was observed that drivers often initiated the recovery steering maneuver while looking off-road. 
To sum up, the proposed models in this thesis facilitate the development of personalized ADASs and contribute to trustworthy virtual evaluations of current, future, and conceptual safety systems. The insights and ideas contribute to an enhanced, human-centric system development, verification, and validation process. In the long term, this will likely lead to improved vehicle safety and a reduced number of severe injuries and fatalities in traffic.
  •  
37.
  • Zhang, Chi, et al. (författare)
  • Social-IWSTCNN: A social interaction-weighted spatio-temporal convolutional neural network for pedestrian trajectory prediction in urban traffic scenarios
  • 2021
  • Ingår i: IEEE Intelligent Vehicles Symposium, 11-17 July 2021, Proceedings. - : IEEE. - 9781728153957 ; 2021-July, s. 1515-1522
  • Konferensbidrag (refereegranskat)abstract
    • Pedestrian trajectory prediction in urban scenarios is essential for automated driving. This task is challenging because the behavior of pedestrians is influenced by both their own history paths and the interactions with others. Previous research modeled these interactions with pooling mechanisms or aggregating with hand-crafted attention weights. In this paper, we present the Social Interaction-Weighted Spatio-Temporal Convolutional Neural Network (Social-IWSTCNN), which includes both the spatial and the temporal features. We propose a novel design, namely the Social Interaction Extractor, to learn the spatial and social interaction features of pedestrians. Most previous works used ETH and UCY datasets which include five scenes but do not cover urban traffic scenarios extensively for training and evaluation. In this paper, we use the recently released large-scale Waymo Open Dataset in urban traffic scenarios, which includes 374 urban training scenes and 76 urban testing scenes to analyze the performance of our proposed algorithm in comparison to the state-of-the-art (SOTA) models. The results show that our algorithm outperforms SOTA algorithms such as Social-LSTM, Social-GAN, and Social-STGCNN on both Average Displacement Error (ADE) and Final Displacement Error (FDE). Furthermore, our Social-IWSTCNN is 54.8 times faster in data pre-processing speed, and 4.7 times faster in total test speed than the current best SOTA algorithm Social-STGCNN.
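ADE and FDE are the standard trajectory-prediction metrics used above. For reference, they can be computed as in this generic sketch (not the paper's evaluation code):

```python
import math

def ade(pred, truth):
    """Average Displacement Error: mean Euclidean distance over all predicted timesteps."""
    return sum(math.dist(p, t) for p, t in zip(pred, truth)) / len(pred)

def fde(pred, truth):
    """Final Displacement Error: Euclidean distance at the last predicted timestep."""
    return math.dist(pred[-1], truth[-1])

pred = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
truth = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
print(ade(pred, truth))  # (0 + 1 + 2) / 3 = 1.0
print(fde(pred, truth))  # 2.0
```

ADE rewards accuracy along the whole predicted path, while FDE only scores where the pedestrian ends up, which is why both are usually reported together.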
  •  
38.
  • Lindén, Joakim, et al. (författare)
  • Evaluating the Robustness of ML Models to Out-of-Distribution Data Through Similarity Analysis
  • 2023
  • Ingår i: Commun. Comput. Info. Sci.. - : Springer Science and Business Media Deutschland GmbH. - 9783031429408 ; , s. 348-359
  • Konferensbidrag (refereegranskat)abstract
    • In Machine Learning systems, several factors impact the performance of a trained model. The most important ones include model architecture, the amount of training time, the dataset size and diversity. We present a method for analyzing datasets from a use-case scenario perspective, detecting and quantifying out-of-distribution (OOD) data at the dataset level. Our main contribution is the novel use of similarity metrics for the evaluation of the robustness of a model by introducing relative Fréchet Inception Distance (FID) and relative Kernel Inception Distance (KID) measures. These relative measures are relative to a baseline in-distribution dataset and are used to estimate how the model will perform on OOD data (i.e. estimate the model accuracy drop). We find a correlation between our proposed relative FID/relative KID measure and the drop in Average Precision (AP) accuracy on unseen data.
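For intuition, the Fréchet distance underlying FID has a closed form for Gaussians. The sketch below shows the univariate case plus a "relative" measure; the normalisation by a held-out in-distribution baseline is this sketch's assumption about the paper's relative construction, not a quotation of it:

```python
import math

def fid_1d(mu1, var1, mu2, var2):
    """Fréchet distance between two univariate Gaussians; image-level FID
    applies the same formula to multivariate Inception activation statistics."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2 * math.sqrt(var1 * var2)

def relative_fid(fid_ood, fid_baseline):
    """FID of an OOD set normalised by the FID of a held-out in-distribution
    set against the same reference (hypothetical normalisation)."""
    return fid_ood / fid_baseline

print(fid_1d(0.0, 1.0, 0.0, 1.0))  # 0.0 (identical distributions)
print(fid_1d(1.0, 1.0, 3.0, 1.0))  # 4.0 (mean shift of 2)
print(relative_fid(4.0, 2.0))      # 2.0 (twice as far out as the baseline)
```

A relative value well above 1 then flags a candidate dataset as markedly more out-of-distribution than ordinary in-distribution variation, which is where an accuracy drop would be expected.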
  •  
39.
  • Balouji, Ebrahim, 1985, et al. (författare)
  • A LSTM-based Deep Learning Method with Application to Voltage Dip Classification
  • 2018
  • Ingår i: 2018 18TH INTERNATIONAL CONFERENCE ON HARMONICS AND QUALITY OF POWER (ICHQP). - Piscataway, NJ : Institute of Electrical and Electronics Engineers (IEEE). - 2164-0610. - 9781538605172 ; 2018-May
  • Konferensbidrag (refereegranskat)abstract
    • In this paper, a deep learning (DL)-based method for automatic feature extraction and classification of voltage dips is proposed. The method consists of a dedicated architecture of Long Short-Term Memory (LSTM), which is a special type of Recurrent Neural Networks (RNNs). A total of 5982 three-phase one-cycle voltage dip RMS sequences, measured from several countries, has been used in our experiments. Our results have shown that the proposed method is able to classify the voltage dips from learned features in LSTM, with 93.40% classification accuracy on the test data set. The developed architecture is shown to be novel for feature learning and classification of voltage dips. Different from the conventional machine learning methods, the proposed method is able to learn dip features without requiring transition-event segmentation, selecting thresholds, and using expert rules or human expert knowledge, when a large amount of measurement data is available. This opens a new possibility of exploiting deep learning technology for power quality data analytics and classification.
  •  
40.
  • Magnusson, Andreas (författare)
  • Evolutionary optimisation of a morphological image processor for embedded systems
  • 2008
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • The work presented in this thesis concerns the design, development and implementation of two digital components to be used, primarily, in autonomously operating embedded systems, such as mobile robots. The first component is an image coprocessor, for high-speed morphological image processing, and the second is a hardware-based genetic algorithm coprocessor, which provides evolutionary computation functionality for embedded applications. The morphological image coprocessor, the Clutter-II, has been optimised for efficiency of implementation, processing speed and system integration. The architecture employs a compact hardware structure for its implementation of the morphological neighbourhood transformations. The compact structure realises a significantly reduced hardware resource cost. The resources saved by the compact structure can be used to increase parallelism in image processing operations, thereby improving processing speed in a similarly significant manner. The design of the Clutter-II as a coprocessor enables easy-to-use and efficient access to its image processing capabilities from the host system processor and application software. High-speed input-output interfaces, with separated instruction and data buses, provide effective communication with system components external to the Clutter-II. A substantial part of the work presented in this thesis concerns the practical implementation of morphological filters for the Clutter-II, using the compact transformation structure. To derive efficient filter implementations, a genetic algorithm has been developed. The algorithm optimises the filter implementation by minimising the number of operations required for a particular filter. The experience gained from the work on the genetic algorithm inspired the development of the second component, the HERPUC. HERPUC is a hardware-based genetic algorithm processor, which employs a novel hardware implementation of the selection mechanism of the algorithm. 
This, in combination with a flexible form of recombination operator, has made the HERPUC an efficient hardware implementation of a genetic algorithm. Results indicate that the HERPUC is able to solve the set of test problems, to which it has been applied, using fewer fitness evaluations and a smaller population size, than previous hardware-based genetic algorithm implementations.
  •  
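The genetic-algorithm loop that a processor like HERPUC accelerates in hardware — selection, recombination, mutation over generations — can be sketched in software. A minimal generational GA on the classic OneMax problem (maximise the number of 1-bits); the tournament size, mutation rate, and population size here are illustrative assumptions, not HERPUC's parameters:

```python
import random

def evolve(fitness, n_bits=16, pop_size=20, generations=40, seed=1):
    """Minimal generational GA: tournament selection, one-point
    crossover, and bit-flip mutation. A software sketch of the loop
    a hardware GA coprocessor accelerates; parameters are illustrative.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # Binary tournament: the fitter of two random individuals wins.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)     # one-point recombination
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.05:            # occasional bit-flip mutation
                j = rng.randrange(n_bits)
                child[j] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)   # OneMax: fitness of a bit string is its number of 1-bits
```

The thesis's point is that the selection mechanism and a flexible recombination operator — the `tournament()` and crossover steps above — are the parts realised directly in hardware.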
41.
  • Lindgren, Helena, Professor, et al. (author)
  • The wasp-ed AI curriculum : A holistic curriculum for artificial intelligence
  • 2023
  • In: INTED2023 Proceedings. - : IATED. - 9788409490264 ; pp. 6496-6502
  • Conference paper (peer-reviewed) abstract
    • Efforts in lifelong learning and competence development in Artificial Intelligence (AI) have been on the rise for several years. These initiatives have mostly been applied to the Science, Technology, Engineering and Mathematics (STEM) disciplines. Even though there has been significant development in the Digital Humanities to incorporate AI methods and tools in higher education, the potential for such competences in the Arts, Humanities and Social Sciences is far from being realised. Furthermore, there is an increasing awareness that the STEM disciplines need to include competences relating to AI in humanity and society. This is especially important considering the widening and deepening impact of AI on society at large and on individuals. The aim of the presented work is to provide a broad and inclusive AI curriculum that covers the breadth of the topic as it is seen today, which is significantly different from only a decade ago. It is important to note that by the curriculum we mean an overview of the subject itself, rather than a particular education program. The curriculum is intended to be used as a foundation for educational activities in AI, for example to harmonize terminology, compare different programs, and identify educational gaps to be filled. An important aspect of the curriculum is the ethical, legal, and societal dimension of AI: the curriculum is not limited to the STEM subjects but extends to a holistic, human-centred AI perspective. The curriculum is developed as part of the national research program WASP-ED, the Wallenberg AI and transformative technologies education development program.
  •  
42.
  • Ge, Chenjie, 1991, et al. (author)
  • 3D Multi-Scale Convolutional Networks for Glioma Grading Using MR Images
  • 2018
  • In: Proceedings - International Conference on Image Processing, ICIP. - 1522-4880. - 9781479970612 ; pp. 141-145
  • Conference paper (peer-reviewed) abstract
    • This paper addresses the grading of the brain tumor glioma from Magnetic Resonance Images (MRIs). Although feature pyramids have been shown to be useful for extracting multi-scale features for object recognition, they are rarely explored in MRI for glioma classification/grading. For glioma grading, existing deep learning methods often use convolutional neural networks (CNNs) to extract single-scale features, without considering that the scales of brain tumor features vary depending on structure/shape, size, tissue smoothness, and location. In this paper, we propose to incorporate multi-scale feature learning into a deep convolutional network architecture, which extracts multi-scale semantic as well as fine features for glioma tumor grading. The main contributions of the paper are: (a) a novel 3D multi-scale convolutional network architecture for the dedicated task of glioma grading; (b) a novel feature fusion scheme that further refines the multi-scale features generated by the multi-scale convolutional layers; (c) a saliency-aware strategy to enhance tumor regions of MRIs. Experiments were conducted on an open dataset for classifying high/low grade gliomas. Performance on the test set using the proposed scheme shows good results, with an accuracy of 89.47%.
  •  
43.
  • Simistira Liwicki, Foteini, et al. (author)
  • Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition
  • 2023
  • In: Scientific Data. - : Springer Nature. - 2052-4463. ; 10
  • Journal article (peer-reviewed) abstract
    • The recognition of inner speech, which could give a ‘voice’ to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal datasets of brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and are therefore promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or a numerical category. Each of the 8 word stimuli was assessed in 40 trials, resulting in 320 trials per modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses.
  •  
44.
  • Granlund, Gösta H. (author)
  • A Nonlinear, Image-content Dependent Measure of Image Quality
  • 1977
  • Report (other academic/artistic) abstract
    • In recent years, considerable research effort has been devoted to the development of useful descriptors for image quality. The attempts have been hampered by incomplete understanding of the operation of the human visual system, which has made it difficult to relate physical measures and perceptual traits. A new model for determination of image quality is proposed. Its main feature is that it tries to take image content into consideration. The model builds upon a theory of image linearization, which means that the information in an image can be represented well enough using linear segments or structures within local spatial regions and frequency ranges. This also implies a suggestion that information in an image has to do with one-dimensional correlations. This gives a possibility to separate image content from noise in images, and to measure them both. A hypothesis is also proposed that the human visual system does in fact perform such a linearization.
  •  
45.
  • Rødseth, Ørnulf Jan, et al. (author)
  • Communication Architecture for an Unmanned Merchant Ship
  • 2013
  • In: OCEANS - Bergen, 2013 MTS/IEEE. - 9781479900008 ; 2013, pp. 1-9
  • Conference paper (peer-reviewed) abstract
    • Unmanned ships are an interesting proposal for implementing slow steaming and saving fuel while avoiding that the crew has to stay on board for very long deep-sea passages. To maintain efficiency and safety, autonomy has to be implemented so that the ship can operate without requiring the shore control centre (SCC) to continuously control it. Communication between the ship and the SCC is therefore critical for the unmanned ship, and proper safety and security precautions are required, including sufficient redundancy and backup solutions. Communication systems should be able to supply at least 4 Mbit/s for full remote control from the SCC, but reduced operation can be maintained down to 125 kbit/s. In autonomous mode, the required communication bandwidth will be very low. For an autonomous ship, the higher bandwidth requirement is from ship to shore, which is the opposite of the situation for normal ships. Cost and availability of communication are issues. The use of technical and functional indexes will enable the SCC to monitor the status of the ship with minimum load on operators and on the communication systems. The security and integrity of information transfers is crucial, and appropriate means must be taken to ensure failure tolerance and fail-to-safe properties of the system.
  •  
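The bandwidth figures quoted in the abstract above can be put in perspective with simple arithmetic; a back-of-the-envelope sketch of the implied link budget (the variable names are ours, not the paper's):

```python
# Link budget for the two operating modes quoted in the abstract:
# 4 Mbit/s for full remote control, 125 kbit/s for reduced operation.
full_bps = 4_000_000     # bits per second, full remote control from the SCC
reduced_bps = 125_000    # bits per second, reduced operation

# Reduced mode offers 32x less capacity than full remote control.
ratio = full_bps / reduced_bps

# Data volume per hour at the full rate, in megabytes (bits -> bytes -> MB).
mb_per_hour_full = full_bps * 3600 / 8 / 1e6
```

At 4 Mbit/s a continuously remote-controlled ship would push roughly 1.8 GB of data per hour from ship to shore, which is why cost and availability of satellite communication are flagged as an issue.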
46.
  • Suchan, Jakob, et al. (author)
  • Commonsense Visual Sensemaking for Autonomous Driving : On Generalised Neurosymbolic Online Abduction Integrating Vision and Semantics
  • 2021
  • In: Artificial Intelligence. - : Elsevier. - 0004-3702 .- 1872-7921. ; 299
  • Journal article (peer-reviewed) abstract
    • We demonstrate the need and potential of systematically integrated vision and semantics solutions for visual sensemaking in the backdrop of autonomous driving. A general neurosymbolic method for online visual sensemaking using answer set programming (ASP) is systematically formalised and fully implemented. The method integrates state of the art in visual computing, and is developed as a modular framework that is generally usable within hybrid architectures for realtime perception and control. We evaluate and demonstrate with community established benchmarks KITTIMOD, MOT-2017, and MOT-2020. As use-case, we focus on the significance of human-centred visual sensemaking —e.g., involving semantic representation and explainability, question-answering, commonsense interpolation— in safety-critical autonomous driving situations. The developed neurosymbolic framework is domain-independent, with the case of autonomous driving designed to serve as an exemplar for online visual sensemaking in diverse cognitive interaction settings in the backdrop of select human-centred AI technology design considerations.
  •  
47.
  • Haage, Mathias, et al. (author)
  • Teaching Assembly by Demonstration using Advanced Human Robot Interaction and a Knowledge Integration Framework
  • 2017
  • In: Procedia Manufacturing. - : Elsevier BV. - 2351-9789. ; 11, pp. 164-173
  • Journal article (peer-reviewed) abstract
    • Conventional industrial robots are heavily dependent on hard automation that requires pre-specified fixtures and time-consuming (re)programming performed by experienced operators. In this work, teaching by human-only demonstration is used for reducing required time and expertise to setup a robotized assembly station. This is achieved by the proposed framework enhancing the robotic system with advanced perception and cognitive abilities, accessed through a user-friendly Human Robot Interaction interface. The approach is evaluated on a small parts’ assembly use case deployed onto a collaborative industrial robot testbed. Experiments indicate that the proposed approach allows inexperienced users to efficiently teach robots new assembly tasks.
  •  
48.
  • Orthmann, Bastian, et al. (author)
  • Sounding Robots: Design and Evaluation of Auditory Displays for Unintentional Human-robot Interaction
  • 2023
  • In: ACM Transactions on Human-Robot Interaction. - : Association for Computing Machinery (ACM). - 2573-9522. ; 12:4
  • Journal article (peer-reviewed) abstract
    • Non-verbal communication is important in HRI, particularly when humans and robots do not need to actively engage in a task together, but rather they co-exist in a shared space. Robots might still need to communicate states such as urgency or availability, and where they intend to go, to avoid collisions and disruptions. Sounds could be used to communicate such states and intentions in an intuitive and non-disruptive way. Here, we propose a multi-layer classification system for displaying various robot information simultaneously via sound. We first conceptualise which robot features could be displayed (robot size, speed, availability for interaction, urgency, and directionality); we then map them to a set of audio parameters. The designed sounds were then evaluated in five online studies, where people listened to the sounds and were asked to identify the associated robot features. The sounds were generally understood as intended by participants, especially when they were evaluated one feature at a time, and partially when they were evaluated two features simultaneously. The results of these evaluations suggest that sounds can be successfully used to communicate robot states and intended actions implicitly and intuitively.
  •  
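A feature-to-audio-parameter mapping of the kind described in the abstract above can be sketched as a simple function; the specific parameters and ranges below are hypothetical illustrations, not the authors' design:

```python
def sound_params(size, speed, urgency):
    """Map robot features (each normalised to [0, 1]) to audio
    parameters, in the spirit of the paper's multi-layer display.
    The concrete ranges here are assumptions for illustration only.
    """
    assert all(0.0 <= v <= 1.0 for v in (size, speed, urgency))
    return {
        'pitch_hz': 200 + 800 * (1.0 - size),  # bigger robot -> lower pitch
        'tempo_bpm': 60 + 120 * speed,         # faster robot -> faster pulse
        'pulse_duty': 0.2 + 0.6 * urgency,     # more urgent -> denser sound
    }

# A large, stationary, non-urgent robot maps to a low, slow, sparse sound.
params = sound_params(size=1.0, speed=0.0, urgency=0.0)
```

Layering several such independent parameter dimensions is what lets one continuous sound display multiple robot states simultaneously.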
49.
  • Barreiro, Anabela, et al. (author)
  • Multi3Generation : Multitask, Multilingual, Multimodal Language Generation
  • 2022
  • In: Proceedings of the 23rd Annual Conference of the European Association for Machine Translation. - : European Association for Machine Translation. ; pp. 345-346
  • Conference paper (peer-reviewed) abstract
    • This paper presents the Multitask, Multilingual, Multimodal Language Generation COST Action – Multi3Generation (CA18231), an interdisciplinary network of research groups working on different aspects of language generation. This "meta-paper" will serve as a reference for citation of the Action in future publications. It presents the objectives, the challenges, and links to the achieved outcomes.
  •  
50.
  •  
Publication type
conference paper (2141)
journal article (1202)
doctoral thesis (174)
book chapter (143)
report (88)
licentiate thesis (74)
other publication (66)
research review (29)
proceedings (editorship) (15)
edited collection (editorship) (13)
book (10)
patent (6)
artistic work (4)
review (1)
Content type
peer-reviewed (3192)
other academic/artistic (744)
popular science, debate, etc. (22)
Author/editor
Lindeberg, Tony, 196 ... (114)
Balkenius, Christian (114)
Gu, Irene Yu-Hua, 19 ... (110)
Lindblad, Joakim (84)
Kragic, Danica (83)
Bengtsson, Ewert (71)
Sladoje, Nataša (65)
Felsberg, Michael (55)
Heyden, Anders (53)
Felsberg, Michael, 1 ... (53)
Åström, Karl (49)
Khan, Fahad (46)
Kahl, Fredrik, 1972 (46)
Johnsson, Magnus (46)
Kragic, Danica, 1971 ... (42)
Oskarsson, Magnus (41)
Khan, Fahad Shahbaz, ... (40)
Borgefors, Gunilla (37)
Johansson, Birger (36)
Sintorn, Ida-Maria (35)
Hotz, Ingrid (35)
Ek, Carl Henrik (34)
Ahlberg, Jörgen, 197 ... (34)
Khan, Salman (34)
Li, Haibo (33)
Nikolakopoulos, Geor ... (33)
Mehnert, Andrew, 196 ... (33)
Kahl, Fredrik (32)
Larsson, Viktor (32)
Brun, Anders (31)
Nyström, Ingela (30)
Sattler, Torsten, 19 ... (29)
Pollefeys, Marc (29)
Maki, Atsuto (29)
Bekiroglu, Yasemin, ... (29)
Nalpantidis, Lazaros (28)
Stork, Johannes Andr ... (27)
Pham, Tuan D. (27)
Yang, Jie (26)
Olsson, Carl (26)
Åström, Kalle (25)
Liwicki, Marcus (25)
Hast, Anders (25)
Svensson, Stina (25)
Gasteratos, Antonios (25)
Strand, Robin (24)
Karayiannidis, Yiann ... (24)
Lilienthal, Achim J. ... (23)
Wählby, Carolina (23)
Seipel, Stefan (23)
University
Kungliga Tekniska Högskolan (832)
Chalmers tekniska högskola (769)
Uppsala universitet (678)
Linköpings universitet (570)
Lunds universitet (566)
Örebro universitet (222)
Umeå universitet (152)
Göteborgs universitet (140)
Luleå tekniska universitet (115)
Högskolan i Halmstad (107)
Sveriges Lantbruksuniversitet (100)
Högskolan i Skövde (51)
Högskolan i Gävle (41)
Blekinge Tekniska Högskola (39)
Mälardalens universitet (35)
RISE (35)
Mittuniversitetet (29)
Linnéuniversitetet (22)
Stockholms universitet (21)
Karolinska Institutet (21)
Jönköping University (16)
Högskolan Väst (12)
Malmö universitet (7)
Högskolan Dalarna (6)
Stockholms konstnärliga högskola (2)
Högskolan i Borås (1)
Karlstads universitet (1)
Försvarshögskolan (1)
VTI - Statens väg- och transportforskningsinstitut (1)
IVL Svenska Miljöinstitutet (1)
Röda Korsets Högskola (1)
Language
English (3919)
Swedish (40)
French (3)
German (1)
Spanish (1)
Research subject (UKÄ/SCB)
Natural sciences (3963)
Engineering and technology (975)
Medical and health sciences (164)
Social sciences (111)
Humanities (89)
Agricultural sciences (61)

Year
