SwePub
Search the SwePub database


Result list for the search "AMNE:(NATURVETENSKAP Data- och informationsvetenskap Datorseende och robotik) srt2:(2020-2024)"


  • Result 1-10 of 1287
1.
  • Blanch, Krister, 1991 (author)
  • Beyond-application datasets and automated fair benchmarking
  • 2023
  • Licentiate thesis (other academic/artistic)
    • Beyond-application perception datasets are generalised datasets that emphasise the fundamental components of good machine perception data. When analysing the history of perception datasets, notable trends suggest that dataset design typically aligns with an application goal. Instead of focusing on a specific application, beyond-application datasets look at capturing high-quality, high-volume data from a highly kinematic environment, for the purpose of aiding algorithm development and testing in general. Algorithm benchmarking is a cornerstone of autonomous systems development, and allows developers to demonstrate their results in a comparative manner. However, most benchmarking systems allow developers to use their own hardware or select favourable data. There is also little focus on run-time performance and consistency, with benchmarking systems instead showcasing algorithm accuracy. When combining beyond-application dataset generation with methods for fair benchmarking, there is also the dilemma of how to provide the dataset to developers for this benchmarking, as high-volume, high-quality dataset generation significantly increases dataset size compared to traditional perception datasets. This thesis presents the first results of attempting the creation of such a dataset. The dataset was built using a maritime platform, selected due to the highly dynamic environment presented on water. The design and initial testing of this platform is detailed, as well as methods of sensor validation. The thesis then presents a method of fair benchmarking that utilises remote containerisation in a way that allows developers to present their software to the dataset, instead of having to first store a local copy. To test this dataset and automatic online benchmarking, a number of reference algorithms were required for initial results. Three algorithms were built, using the data from three different sensors captured on the maritime platform. Each algorithm calculates vessel odometry, and the automatic benchmarking system was used to show the accuracy and run-time performance of these algorithms. It was found that the containerised approach alleviated data management concerns, prevented inflated accuracy results, and demonstrated precisely how computationally intensive each algorithm was.
  •  
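The containerised benchmarking idea in the record above — running a submitted algorithm against data it never stores locally, on fixed hardware, while timing it — can be pictured with a small sketch using the Docker SDK for Python. The image name, mount path, and JSON result format are hypothetical placeholders, not the thesis's actual harness.

```python
import json
import time

import docker  # Docker SDK for Python (pip install docker)

# Hypothetical names: the submitted algorithm image and the host-side dataset path.
ALGORITHM_IMAGE = "developer/odometry-algo:latest"
DATASET_DIR = "/srv/benchmark/maritime_dataset"

def run_benchmark(image: str, dataset_dir: str) -> dict:
    """Run a submitted container against a read-only dataset mount and time it."""
    client = docker.from_env()
    start = time.perf_counter()
    # The dataset is mounted read-only, so the submission never keeps a local copy
    # and cannot modify or cherry-pick the evaluation data.
    logs = client.containers.run(
        image,
        volumes={dataset_dir: {"bind": "/data", "mode": "ro"}},
        remove=True,   # clean up the container after the run
        stdout=True,
        stderr=True,
    )
    elapsed = time.perf_counter() - start
    # Assume (hypothetically) that the container prints its odometry estimates as JSON.
    estimates = json.loads(logs.decode("utf-8"))
    return {"runtime_s": elapsed, "estimates": estimates}

if __name__ == "__main__":
    result = run_benchmark(ALGORITHM_IMAGE, DATASET_DIR)
    print(f"run time: {result['runtime_s']:.1f} s")
```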
2.
  • Ali, Muhaddisa Barat, 1986 (author)
  • Deep Learning Methods for Classification of Gliomas and Their Molecular Subtypes, From Central Learning to Federated Learning
  • 2023
  • Doctoral thesis (other academic/artistic)
    • Gliomas are the most common type of brain cancer in adults. Under the updated 2016 World Health Organization (WHO) classification of tumors of the central nervous system (CNS), identification of the molecular subtypes of gliomas is important. For low-grade gliomas (LGGs), predicting molecular subtypes from magnetic resonance imaging (MRI) scans can be difficult without taking a biopsy. With the development of machine learning (ML) methods such as deep learning (DL), molecular-based classification methods have shown promising results from MRI scans that may assist clinicians in prognosis and in deciding on a treatment strategy. However, DL requires large training datasets with tumor class labels and tumor boundary annotations, and manual annotation of tumor boundaries is a time-consuming and expensive process. The thesis is based on the work developed in five papers on gliomas and their molecular subtypes. We propose novel methods that provide improved performance. The proposed methods consist of a multi-stream convolutional autoencoder (CAE)-based classifier, a deep convolutional generative adversarial network (DCGAN) to enlarge the training dataset, a CycleGAN to handle domain shift, a novel federated learning (FL) scheme to allow local client-based training with dataset protection, and the use of bounding boxes on MRIs when tumor boundary annotations are not available. Experimental results showed that DCGAN-generated MRIs enlarged the original training dataset and improved classification performance on test sets. CycleGAN showed good domain adaptation on multiple source datasets and improved classification performance. The proposed FL scheme showed slightly degraded performance compared to the central learning (CL) approach while protecting dataset privacy. Using tumor bounding boxes proved to be an alternative to tumor boundary annotation for tumor classification and segmentation, with a trade-off between a slight decrease in performance and time saved on manual marking by clinicians. The proposed methods may benefit future research on bringing DL tools into clinical practice to assist tumor diagnosis and support the decision-making process.
  •  
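The record above mentions a federated learning scheme that keeps data on local clients. As a rough illustration of the general idea only (not the thesis's specific scheme), the sketch below shows FedAvg-style aggregation: each client trains locally, and only its model parameters, weighted by local dataset size, are averaged on the server.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: weight each client's parameters by its dataset size.

    client_weights: list of per-client parameter lists (one numpy array per layer)
    client_sizes:   number of local training samples per client
    """
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        layer_sum = sum(
            (size / total) * weights[layer]
            for weights, size in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_sum)
    return aggregated

# Toy example with two clients and a single-layer "model".
client_a = [np.array([1.0, 2.0])]
client_b = [np.array([3.0, 4.0])]
global_model = federated_average([client_a, client_b], client_sizes=[100, 300])
print(global_model)  # [array([2.5, 3.5])] -- pulled toward the larger client
```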
3.
  • Isaksson, Martin, et al. (author)
  • Adaptive Expert Models for Federated Learning
  • 2023
  • In: Lecture Notes in Computer Science, Volume 13448, 2023. - Cham : Springer Science and Business Media Deutschland GmbH. - 9783031289958 ; 13448 LNAI, s. 1-16
  • Conference paper (peer-reviewed)
    • Federated Learning (FL) is a promising framework for distributed learning when data is private and sensitive. However, the state-of-the-art solutions in this framework are not optimal when data is heterogeneous and non-IID. We propose a practical and robust approach to personalization in FL that adjusts to heterogeneous and non-IID data by balancing exploration and exploitation of several global models. To achieve our aim of personalization, we use a Mixture of Experts (MoE) that learns to group clients that are similar to each other, while using the global models more efficiently. We show that our approach achieves an accuracy up to 29.78% better than the state-of-the-art and up to 4.38% better compared to a local model in a pathological non-IID setting, even though we tune our approach in the IID setting. © 2023, The Author(s)
  •  
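To make the Mixture of Experts idea in the record above concrete, the sketch below shows a minimal gating network that learns a per-input soft weighting over several frozen global models. This is a generic MoE construction for illustration, assuming PyTorch; it is not the paper's exact architecture or training procedure.

```python
import torch
import torch.nn as nn

class MixtureOfGlobalModels(nn.Module):
    """Minimal MoE head: a gate mixes the outputs of several (frozen) global models."""

    def __init__(self, global_models, input_dim: int):
        super().__init__()
        self.experts = nn.ModuleList(global_models)
        for p in self.experts.parameters():
            p.requires_grad = False  # only the gate is trained locally on the client
        self.gate = nn.Linear(input_dim, len(global_models))

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)            # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], 1)   # (batch, n_experts, n_classes)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)      # weighted mixture

# Toy usage: three "global models" over 20-dimensional inputs, 5 classes.
globals_ = [nn.Linear(20, 5) for _ in range(3)]
model = MixtureOfGlobalModels(globals_, input_dim=20)
logits = model(torch.randn(8, 20))
print(logits.shape)  # torch.Size([8, 5])
```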
4.
  • Lv, Zhihan, Dr. 1984-, et al. (author)
  • Editorial : 5G for Augmented Reality
  • 2022
  • In: Mobile Networks and Applications. - : Springer. - 1383-469X .- 1572-8153.
  • Journal article (peer-reviewed)
  •  
5.
  • Norlund, Tobias, 1991, et al. (author)
  • Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it?
  • 2021
  • In: Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pp. 149-162, Punta Cana, Dominican Republic. - : Association for Computational Linguistics.
  • Conference paper (peer-reviewed)
    • Large language models are known to suffer from the hallucination problem in that they are prone to output statements that are false or inconsistent, indicating a lack of knowledge. A proposed solution to this is to provide the model with additional data modalities that complement the knowledge obtained through text. We investigate the use of visual data to complement the knowledge of large language models by proposing a method for evaluating visual knowledge transfer to text for uni- or multimodal language models. The method is based on two steps: 1) a novel task querying for knowledge of memory colors, i.e. typical colors of well-known objects, and 2) filtering of model training data to clearly separate knowledge contributions. Additionally, we introduce a model architecture that involves a visual imagination step and evaluate it with our proposed method. We find that our method can successfully be used to measure visual knowledge transfer capabilities in models and that our novel model architecture shows promising results for leveraging multimodal knowledge in a unimodal setting.
  •  
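The "memory colors" probing task described above can be pictured as a cloze-style query against a masked language model. The sketch below uses the Hugging Face transformers fill-mask pipeline with a generic prompt and object list; the paper's actual prompts, evaluation set, and models are not reproduced here.

```python
from transformers import pipeline

# A generic masked-LM probe; the paper's prompts and filtered training data differ.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

objects = ["banana", "lemon", "frog"]
for obj in objects:
    predictions = fill_mask(f"The color of a {obj} is [MASK].", top_k=3)
    guesses = [p["token_str"] for p in predictions]
    print(f"{obj}: {guesses}")
# A model holding the relevant "memory color" knowledge should rank the typical
# color (yellow, yellow, green) among its top predictions.
```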
6.
  • Gerken, Jan, 1991, et al. (author)
  • Equivariance versus augmentation for spherical images
  • 2022
  • In: Proceedings of Machine Learning Research. ; s. 7404-7421
  • Conference paper (peer-reviewed)
    • We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images. We compare the performance of the group equivariant networks known as S2CNNs and standard non-equivariant CNNs trained with an increasing amount of data augmentation. The chosen architectures can be considered baseline references for the respective design paradigms. Our models are trained and evaluated on single or multiple items from the MNIST or FashionMNIST datasets projected onto the sphere. For the task of image classification, which is inherently rotationally invariant, we find that by considerably increasing the amount of data augmentation and the size of the networks, it is possible for the standard CNNs to reach at least the same performance as the equivariant network. In contrast, for the inherently equivariant task of semantic segmentation, the non-equivariant networks are consistently outperformed by the equivariant networks with significantly fewer parameters. We also analyze and compare the inference latency and training times of the different networks, enabling detailed tradeoff considerations between equivariant architectures and data augmentation for practical problems.
  •  
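The data-augmentation baseline in the study above relies on rotating spherical inputs during training. As a hedged sketch of what such augmentation looks like (independent of the specific S2CNN and CNN architectures compared in the paper), the snippet below applies a uniformly random SO(3) rotation to sample points on the sphere using SciPy.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def random_sphere_rotation(points, rng_seed=None):
    """Apply one uniformly random 3D rotation to an (N, 3) array of unit vectors.

    For a spherical image, `points` would be the sample positions on the sphere;
    the associated pixel values travel with their rotated positions.
    """
    rotation = Rotation.random(random_state=rng_seed)  # uniform over SO(3)
    return rotation.apply(points)

# Toy example: rotate three points on the unit sphere.
pts = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
rotated = random_sphere_rotation(pts, rng_seed=0)
print(np.allclose(np.linalg.norm(rotated, axis=1), 1.0))  # True: rotations preserve norms
```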
7.
  • Ge, Chenjie, 1991, et al. (author)
  • Enlarged Training Dataset by Pairwise GANs for Molecular-Based Brain Tumor Classification
  • 2020
  • In: IEEE Access. - 2169-3536 .- 2169-3536. ; 8:1, s. 22560-22570
  • Journal article (peer-reviewed)
    • This paper addresses brain tumor subtype classification using Magnetic Resonance Images (MRIs) from different scanner modalities such as T1-weighted, contrast-enhanced T1-weighted, T2-weighted and FLAIR images. Currently most available glioma datasets are relatively moderate in size, and often accompanied by incomplete MRIs in different modalities. To tackle the commonly encountered problems of insufficiently large brain tumor datasets and incomplete image modalities for deep learning, we propose to add augmented brain MR images to enlarge the training dataset by employing a pairwise Generative Adversarial Network (GAN) model. The pairwise GAN is able to generate synthetic MRIs across different modalities. To achieve a patient-level diagnostic result, we propose a post-processing strategy that combines the slice-level glioma subtype classification results by majority voting. A two-stage coarse-to-fine training strategy is proposed to learn glioma features using GAN-augmented MRIs followed by real MRIs. To evaluate the effectiveness of the proposed scheme, experiments have been conducted on a brain tumor dataset for classifying glioma molecular subtypes: isocitrate dehydrogenase 1 (IDH1) mutation and IDH1 wild-type. Our results on the dataset show good performance (with a test accuracy of 88.82%). Comparisons with several state-of-the-art methods are also included.
  •  
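The post-processing step described above — combining slice-level predictions into one patient-level diagnosis by majority voting — is simple to illustrate. The sketch below is a generic implementation with made-up labels, not the paper's code.

```python
from collections import Counter

def patient_level_prediction(slice_predictions):
    """Majority vote over per-slice glioma subtype predictions for one patient."""
    votes = Counter(slice_predictions)
    label, _count = votes.most_common(1)[0]
    return label

# Toy example: predictions for the MRI slices of a single patient.
slices = ["IDH1-mutation", "IDH1-wildtype", "IDH1-mutation", "IDH1-mutation"]
print(patient_level_prediction(slices))  # IDH1-mutation
```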
8.
  • Lv, Zhihan, Dr. 1984-, et al. (author)
  • 5G for mobile augmented reality
  • 2022
  • In: International Journal of Communication Systems. - : John Wiley & Sons. - 1074-5351 .- 1099-1131. ; 35:5
  • Journal article (other academic/artistic)
  •  
9.
  • Somanath, Sanjay, 1994, et al. (author)
  • Towards Urban Digital Twins: A Workflow for Procedural Visualization Using Geospatial Data
  • 2024
  • In: Remote Sensing. - 2072-4292. ; 16:11
  • Journal article (peer-reviewed)
    • A key feature of urban digital twins (DTs) is an automatically generated, detailed 3D representation of the built and unbuilt environment from aerial imagery, footprints, LiDAR, or a fusion of these. Such 3D models have applications in architecture, civil engineering, urban planning, construction, real estate, Geographical Information Systems (GIS), and many other areas. While the visualization of large-scale data in conjunction with the generated 3D models is a recurring and resource-intensive task, an automated workflow is complex and requires many steps to achieve a high-quality visualization. Building reconstruction methods have come a long way, from manual approaches to semi-automatic or fully automatic ones. This paper aims to complement existing methods of 3D building generation. First, we present a literature review covering different options for procedural context generation and visualization methods, focusing on workflows and data pipelines. Next, we present a semi-automated workflow that extends the building reconstruction pipeline to include procedural context generation using Python and Unreal Engine. Finally, we propose a workflow for integrating various types of large-scale urban analysis data for visualization. We conclude with a series of challenges faced in achieving such pipelines and the limitations of the current approach. The steps toward a complete, end-to-end solution involve further development of robust systems for building detection, rooftop recognition, and geometry generation, as well as importing and visualizing data in the same 3D environment, highlighting the need for further research and development in this field.
  •  
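One elementary building block of the procedural workflow sketched above is turning a 2D building footprint plus a height into simple 3D geometry before any engine-specific import. The snippet below is a plain-Python extrusion sketch with hypothetical coordinates; it is not the paper's pipeline and omits the Unreal Engine side entirely.

```python
def extrude_footprint(footprint, height):
    """Extrude a 2D footprint (list of (x, y) vertices, counter-clockwise) into a prism.

    Returns (vertices, faces): base ring, top ring, and one quad wall per edge,
    with faces given as lists of vertex indices.
    """
    n = len(footprint)
    base = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    vertices = base + top

    faces = [list(range(n)),                      # base polygon
             list(range(2 * n - 1, n - 1, -1))]   # top polygon (reversed for outward normal)
    for i in range(n):
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])        # one quad wall per footprint edge
    return vertices, faces

# Toy example: a 10 m x 6 m rectangular footprint extruded to 9 m.
verts, faces = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], height=9.0)
print(len(verts), len(faces))  # 8 vertices, 6 faces
```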
10.
  • Frid, Emma, 1988-, et al. (author)
  • Perceptual Evaluation of Blended Sonification of Mechanical Robot Sounds Produced by Emotionally Expressive Gestures : Augmenting Consequential Sounds to Improve Non-verbal Robot Communication
  • 2021
  • In: International Journal of Social Robotics. - : Springer Nature. - 1875-4791 .- 1875-4805.
  • Journal article (peer-reviewed)
    • This paper presents two experiments focusing on perception of mechanical sounds produced by expressive robot movement and blended sonifications thereof. In the first experiment, 31 participants evaluated emotions conveyed by robot sounds through free-form text descriptions. The sounds were inherently produced by the movements of a NAO robot and were not specifically designed for communicative purposes. Results suggested no strong coupling between the emotional expression of gestures and how sounds inherent to these movements were perceived by listeners; joyful gestures did not necessarily result in joyful sounds. A word that reoccurred in text descriptions of all sounds, regardless of the nature of the expressive gesture, was “stress”. In the second experiment, blended sonification was used to enhance and further clarify the emotional expression of the robot sounds evaluated in the first experiment. Analysis of quantitative ratings of 30 participants revealed that the blended sonification successfully contributed to enhancement of the emotional message for sound models designed to convey frustration and joy. Our findings suggest that blended sonification guided by perceptual research on emotion in speech and music can successfully improve communication of emotions through robot sounds in auditory-only conditions.
  •  
Type of publication
conference paper (579)
journal article (549)
doctoral thesis (63)
book chapter (24)
research review (19)
reports (18)
licentiate thesis (16)
other publication (12)
editorial collection (3)
artistic work (3)
book (1)
patent (1)
Type of content
peer-reviewed (1116)
other academic/artistic (165)
pop. science, debate, etc. (1)
Author/Editor
Khan, Fahad (37)
Khan, Salman (34)
Nikolakopoulos, Geor ... (32)
Liwicki, Marcus (24)
Kragic, Danica, 1971 ... (21)
Oskarsson, Magnus (17)
Svensson, Lennart, 1 ... (17)
Felsberg, Michael, 1 ... (17)
Åström, Kalle (16)
Sattler, Torsten, 19 ... (16)
Pollefeys, Marc (16)
Zach, Christopher, 1 ... (16)
Kanellakis, Christof ... (15)
Khan, Fahad Shahbaz, ... (15)
Andreasson, Henrik, ... (14)
Kahl, Fredrik, 1972 (14)
Sladoje, Nataša (14)
Anwer, Rao Muhammad (14)
Felsberg, Michael (14)
Heyden, Anders (13)
Shah, Mubarak (12)
Lv, Zhihan, Dr. 1984 ... (12)
Larsson, Viktor (12)
Karayiannidis, Yiann ... (12)
Lindblad, Joakim (12)
Ho, Luis C. (11)
Danelljan, Martin (11)
Kjellström, Hedvig, ... (11)
Wymeersch, Henk, 197 ... (10)
Loutfi, Amy, 1978- (10)
Stricker, Didier (10)
Conway, John, 1963 (10)
Berger, Christian, 1 ... (10)
Bekiroglu, Yasemin, ... (10)
Cholakkal, Hisham (10)
Löfstedt, Tommy (10)
Afzal, Muhammad Zesh ... (9)
Britzen, Silke (9)
Broderick, Avery E. (9)
Chen, Yongjun (9)
Cui, Yuzhu (9)
Fromm, Christian M. (9)
Galison, Peter (9)
Georgiev, Boris (9)
James, David J. (9)
Jeter, Britton (9)
Palmieri, Luigi (9)
Arras, Kai O. (9)
Servin, Martin (9)
Björkman, Mårten, 19 ... (9)
University
Chalmers University of Technology (314)
Royal Institute of Technology (273)
Linköping University (164)
Lund University (137)
Luleå University of Technology (101)
Uppsala University (90)
Örebro University (89)
Umeå University (65)
University of Gothenburg (57)
Halmstad University (27)
Blekinge Institute of Technology (25)
Mid Sweden University (21)
University of Skövde (18)
RISE (16)
Stockholm University (14)
Mälardalen University (14)
Linnaeus University (12)
Karolinska Institutet (12)
Swedish University of Agricultural Sciences (11)
Jönköping University (9)
University West (4)
Malmö University (3)
Högskolan Dalarna (2)
Stockholm University of the Arts (2)
Swedish National Defence College (1)
VTI - The Swedish National Road and Transport Research Institute (1)
IVL Swedish Environmental Research Institute (1)
Language
English (1283)
Swedish (4)
Research subject (UKÄ/SCB)
Natural sciences (1286)
Engineering and Technology (412)
Medical and Health Sciences (69)
Social Sciences (38)
Agricultural Sciences (23)
Humanities (22)
