SwePub
Search the SwePub database


Result list for the search "AMNE:(NATURVETENSKAP Data- och informationsvetenskap Datorseende och robotik) srt2:(2020-2024)"


  • Result 1-10 of 1365
1.
  • Blanch, Krister, 1991 (author)
  • Beyond-application datasets and automated fair benchmarking
  • 2023
  • Licentiate thesis (other academic/artistic)
    • Beyond-application perception datasets are generalised datasets that emphasise the fundamental components of good machine perception data. When analysing the history of perception datasets, notable trends suggest that dataset design typically aligns with an application goal. Instead of focusing on a specific application, beyond-application datasets aim to capture high-quality, high-volume data from a highly kinematic environment, for the purpose of aiding algorithm development and testing in general. Algorithm benchmarking is a cornerstone of autonomous systems development, and allows developers to demonstrate their results in a comparative manner. However, most benchmarking systems allow developers to use their own hardware or select favourable data. There is also little focus on run-time performance and consistency, with benchmarking systems instead showcasing algorithm accuracy. Combining beyond-application dataset generation with methods for fair benchmarking also raises the dilemma of how to provide the dataset to developers for benchmarking, since high-volume, high-quality dataset generation yields a significant increase in dataset size compared to traditional perception datasets. This thesis presents the first results of attempting the creation of such a dataset. The dataset was built using a maritime platform, selected due to the highly dynamic environment presented on water. The design and initial testing of this platform are detailed, as well as methods of sensor validation. The thesis then presents a method of fair benchmarking that utilises remote containerisation, allowing developers to present their software to the dataset instead of first having to store a local copy. To test this dataset and automatic online benchmarking, a number of reference algorithms were required for initial results. Three algorithms were built, using the data from three different sensors captured on the maritime platform. Each algorithm calculates vessel odometry, and the automatic benchmarking system was utilised to show the accuracy and run-time performance of these algorithms. It was found that the containerised approach alleviated data-management concerns, prevented inflated accuracy results, and demonstrated precisely how computationally intensive each algorithm was.
2.
  • Ali, Muhaddisa Barat, 1986 (author)
  • Deep Learning Methods for Classification of Gliomas and Their Molecular Subtypes, From Central Learning to Federated Learning
  • 2023
  • Doctoral thesis (other academic/artistic)
    • Gliomas are the most common type of brain cancer in adults. Under the updated 2016 World Health Organization (WHO) classification of tumors of the central nervous system (CNS), identification of molecular subtypes of gliomas is important. For low-grade gliomas (LGGs), predicting molecular subtypes from magnetic resonance imaging (MRI) scans may be difficult without taking a biopsy. With the development of machine learning (ML) methods such as deep learning (DL), molecular-based classification from MRI scans has shown promising results that may assist clinicians in prognosis and in deciding on a treatment strategy. However, DL requires large training datasets with tumor class labels and tumor boundary annotations, and manual annotation of tumor boundaries is a time-consuming and expensive process. The thesis is based on the work developed in five papers on gliomas and their molecular subtypes. We propose novel methods that provide improved performance. The proposed methods consist of a multi-stream convolutional autoencoder (CAE)-based classifier, a deep convolutional generative adversarial network (DCGAN) to enlarge the training dataset, a CycleGAN to handle domain shift, a novel federated learning (FL) scheme to allow local client-based training with dataset protection, and the use of bounding boxes on MRIs when tumor boundary annotations are not available. Experimental results showed that DCGAN-generated MRIs enlarged the original training dataset and improved the classification performance on test sets. CycleGAN showed good domain adaptation on multiple source datasets and improved the classification performance. The proposed FL scheme showed slightly degraded performance compared to the central learning (CL) approach while protecting dataset privacy. Using tumor bounding boxes proved to be an alternative to tumor boundary annotation for tumor classification and segmentation, trading a slight decrease in performance for saving clinicians' time on manual marking. The proposed methods may benefit future research in bringing DL tools into clinical practice, assisting tumor diagnosis and helping the decision-making process.
3.
  • Isaksson, Martin, et al. (author)
  • Adaptive Expert Models for Federated Learning
  • 2023
  • In: Lecture Notes in Computer Science. - Cham : Springer Science and Business Media Deutschland GmbH. - 9783031289958 ; 13448 LNAI, pp. 1-16
  • Conference paper (peer-reviewed)
    • Federated Learning (FL) is a promising framework for distributed learning when data is private and sensitive. However, the state-of-the-art solutions in this framework are not optimal when data is heterogeneous and non-IID. We propose a practical and robust approach to personalization in FL that adjusts to heterogeneous and non-IID data by balancing exploration and exploitation of several global models. To achieve our aim of personalization, we use a Mixture of Experts (MoE) that learns to group clients that are similar to each other, while using the global models more efficiently. We show that our approach achieves an accuracy up to 29.78% better than the state-of-the-art and up to 4.38% better compared to a local model in a pathological non-IID setting, even though we tune our approach in the IID setting. © 2023, The Author(s)
4.
  • Zhang, Chi, et al. (author)
  • Spatial-Temporal-Spectral LSTM: A Transferable Model for Pedestrian Trajectory Prediction
  • 2023
  • In: IEEE Transactions on Intelligent Vehicles. - : IEEE. - 2379-8858 .- 2379-8904.
  • Journal article (peer-reviewed)
    • Predicting the trajectories of pedestrians is critical for developing safe advanced driver assistance systems and autonomous driving systems. Most existing models for pedestrian trajectory prediction have focused on a single dataset without considering transferability to previously unseen datasets. This leads to poor performance on new unseen datasets and hinders leveraging off-the-shelf labeled datasets and models. In this paper, we propose a transferable model, namely the “Spatial-Temporal-Spectral (STS) LSTM” model, that represents the motion pattern of pedestrians with spatial, temporal, and spectral domain information. Quantitative results and visualizations indicate that our proposed spatial-temporal-spectral representation enables the model to learn generic motion patterns and improves the performance on both source and target datasets. We reveal the transferability of three commonly used network structures, including long short-term memory networks (LSTMs), convolutional neural networks (CNNs), and Transformers, and employ the LSTM structure with negative log-likelihood loss in our model since it has the best transferability. The proposed STS LSTM model demonstrates good prediction accuracy when transferring to target datasets without any prior knowledge, and has a faster inference speed compared to state-of-the-art models. Our work addresses the gap in learning knowledge from source datasets and transferring it to target datasets in the field of pedestrian trajectory prediction, and enables the reuse of publicly available off-the-shelf datasets.
5.
  • Lv, Zhihan, Dr. 1984-, et al. (author)
  • Editorial : 5G for Augmented Reality
  • 2022
  • In: Mobile Networks and Applications. - : Springer. - 1383-469X .- 1572-8153.
  • Journal article (peer-reviewed)
6.
  • Norlund, Tobias, 1991, et al. (author)
  • Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it?
  • 2021
  • In: Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pp. 149-162, Punta Cana, Dominican Republic. - : Association for Computational Linguistics.
  • Conference paper (peer-reviewed)
    • Large language models are known to suffer from the hallucination problem, in that they are prone to output statements that are false or inconsistent, indicating a lack of knowledge. A proposed solution is to provide the model with additional data modalities that complement the knowledge obtained through text. We investigate the use of visual data to complement the knowledge of large language models by proposing a method for evaluating visual knowledge transfer to text for uni- or multimodal language models. The method is based on two steps: 1) a novel task querying for knowledge of memory colors, i.e. typical colors of well-known objects, and 2) filtering of model training data to clearly separate knowledge contributions. Additionally, we introduce a model architecture that involves a visual imagination step and evaluate it with our proposed method. We find that our method can successfully be used to measure visual knowledge transfer capabilities in models and that our novel model architecture shows promising results for leveraging multimodal knowledge in a unimodal setting.
7.
  • Gerken, Jan, 1991, et al. (author)
  • Equivariance versus augmentation for spherical images
  • 2022
  • In: Proceedings of Machine Learning Research. ; pp. 7404-7421
  • Conference paper (peer-reviewed)
    • We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images. We compare the performance of the group-equivariant networks known as S2CNNs and standard non-equivariant CNNs trained with an increasing amount of data augmentation. The chosen architectures can be considered baseline references for the respective design paradigms. Our models are trained and evaluated on single or multiple items from the MNIST or FashionMNIST datasets projected onto the sphere. For the task of image classification, which is inherently rotationally invariant, we find that by considerably increasing the amount of data augmentation and the size of the networks, it is possible for the standard CNNs to reach at least the same performance as the equivariant network. In contrast, for the inherently equivariant task of semantic segmentation, the non-equivariant networks are consistently outperformed by the equivariant networks with significantly fewer parameters. We also analyze and compare the inference latency and training times of the different networks, enabling detailed trade-off considerations between equivariant architectures and data augmentation for practical problems.
8.
  • Ge, Chenjie, 1991, et al. (author)
  • Enlarged Training Dataset by Pairwise GANs for Molecular-Based Brain Tumor Classification
  • 2020
  • In: IEEE Access. - 2169-3536. ; 8:1, pp. 22560-22570
  • Journal article (peer-reviewed)
    • This paper addresses brain tumor subtype classification using Magnetic Resonance Images (MRIs) from different scanner modalities, such as T1-weighted, contrast-enhanced T1-weighted, T2-weighted and FLAIR images. Currently most available glioma datasets are relatively moderate in size, and are often accompanied by incomplete MRIs in different modalities. To tackle the commonly encountered problems of insufficiently large brain tumor datasets and incomplete image modalities for deep learning, we propose to add augmented brain MR images to enlarge the training dataset by employing a pairwise Generative Adversarial Network (GAN) model. The pairwise GAN is able to generate synthetic MRIs across different modalities. To achieve a patient-level diagnostic result, we propose a post-processing strategy that combines the slice-level glioma subtype classification results by majority voting. A two-stage coarse-to-fine training strategy is proposed to learn glioma features using GAN-augmented MRIs followed by real MRIs. To evaluate the effectiveness of the proposed scheme, experiments have been conducted on a brain tumor dataset for classifying glioma molecular subtypes: isocitrate dehydrogenase 1 (IDH1) mutation and IDH1 wild-type. Our results on the dataset show good performance (with test accuracy 88.82%). Comparisons with several state-of-the-art methods are also included.
9.
  • Lv, Zhihan, Dr. 1984-, et al. (author)
  • 5G for mobile augmented reality
  • 2022
  • In: International Journal of Communication Systems. - : John Wiley & Sons. - 1074-5351 .- 1099-1131. ; 35:5
  • Journal article (other academic/artistic)
10.
  • Somanath, Sanjay, 1994, et al. (author)
  • Towards Urban Digital Twins: A Workflow for Procedural Visualization Using Geospatial Data
  • 2024
  • In: Remote Sensing. - 2072-4292. ; 16:11
  • Journal article (peer-reviewed)
    • A key feature of urban digital twins (DTs) is an automatically generated detailed 3D representation of the built and unbuilt environment from aerial imagery, footprints, LiDAR, or a fusion of these. Such 3D models have applications in architecture, civil engineering, urban planning, construction, real estate, Geographical Information Systems (GIS), and many other areas. While the visualization of large-scale data in conjunction with the generated 3D models is often a recurring and resource-intensive task, an automated workflow is complex, requiring many steps to achieve a high-quality visualization. Building reconstruction methods have come a long way, from manual approaches to semi-automatic or fully automatic ones. This paper aims to complement existing methods of 3D building generation. First, we present a literature review covering different options for procedural context generation and visualization methods, focusing on workflows and data pipelines. Next, we present a semi-automated workflow that extends the building reconstruction pipeline to include procedural context generation using Python and Unreal Engine. Finally, we propose a workflow for integrating various types of large-scale urban analysis data for visualization. We conclude with a series of challenges faced in achieving such pipelines and the limitations of the current approach. A complete, end-to-end solution still requires further development of robust systems for building detection, rooftop recognition, and geometry generation, as well as importing and visualizing data in the same 3D environment, highlighting a need for further research and development in this field.
Type of publication
conference paper (620)
journal article (579)
doctoral thesis (66)
book chapter (25)
reports (21)
research review (19)
licentiate thesis (15)
other publication (13)
editorial collection (3)
artistic work (3)
book (1)
patent (1)
Type of content
peer-reviewed (1186)
other academic/artistic (173)
pop. science, debate, etc. (1)
Author/Editor
Khan, Fahad (38)
Nikolakopoulos, Geor ... (36)
Khan, Salman (35)
Liwicki, Marcus (26)
Kragic, Danica, 1971 ... (21)
Oskarsson, Magnus (18)
Svensson, Lennart, 1 ... (17)
Åström, Kalle (16)
Kanellakis, Christof ... (16)
Sattler, Torsten, 19 ... (16)
Pollefeys, Marc (16)
Zach, Christopher, 1 ... (16)
Andreasson, Henrik, ... (15)
Anwer, Rao Muhammad (15)
Kahl, Fredrik, 1972 (14)
Heyden, Anders (14)
Sladoje, Nataša (14)
Felsberg, Michael (14)
Shah, Mubarak (13)
Larsson, Viktor (13)
Karayiannidis, Yiann ... (12)
Lindblad, Joakim (12)
Lilienthal, Achim J. ... (11)
Ho, Luis C. (11)
Wymeersch, Henk, 197 ... (10)
Loutfi, Amy, 1978- (10)
Stricker, Didier (10)
Conway, John, 1963 (10)
Lv, Zhihan, Dr. 1984 ... (10)
Berger, Christian, 1 ... (10)
Jensfelt, Patric, 19 ... (10)
Arras, Kai O. (10)
Servin, Martin (10)
Bekiroglu, Yasemin, ... (10)
Mokayed, Hamam (9)
Magnusson, Martin, D ... (9)
Afzal, Muhammad Zesh ... (9)
Britzen, Silke (9)
Broderick, Avery E. (9)
Chen, Yongjun (9)
Cui, Yuzhu (9)
Fromm, Christian M. (9)
Galison, Peter (9)
Georgiev, Boris (9)
James, David J. (9)
Jeter, Britton (9)
Palmieri, Luigi (9)
Folkesson, John, Ass ... (9)
Göksel, Orcun (9)
Björkman, Mårten, 19 ... (9)
University
Chalmers University of Technology (322)
Royal Institute of Technology (293)
Linköping University (172)
Lund University (145)
Luleå University of Technology (114)
Örebro University (95)
Uppsala University (94)
Umeå University (73)
University of Gothenburg (56)
Halmstad University (31)
Blekinge Institute of Technology (26)
Mid Sweden University (22)
University of Skövde (21)
RISE (18)
Mälardalen University (14)
Stockholm University (13)
Linnaeus University (13)
Swedish University of Agricultural Sciences (13)
Karolinska Institutet (12)
Jönköping University (10)
Malmö University (5)
University West (4)
Högskolan Dalarna (2)
Stockholm University of the Arts (2)
Swedish National Defence College (1)
VTI - The Swedish National Road and Transport Research Institute (1)
IVL Swedish Environmental Research Institute (1)
Language
English (1361)
Swedish (4)
Research subject (UKÄ/SCB)
Natural sciences (1364)
Engineering and Technology (435)
Medical and Health Sciences (72)
Social Sciences (38)
Agricultural Sciences (25)
Humanities (24)
