SwePub
Search the SwePub database


Hit list for the search "L773:1566 2535 OR L773:1872 6305 srt2:(2020-2024)"


  • Results 1-10 of 11
1.
  • Alonso-Fernandez, Fernando, 1978-, et al. (author)
  • Cross-sensor periocular biometrics in a global pandemic : Comparative benchmark and novel multialgorithmic approach
  • 2022
  • In: Information Fusion. - Amsterdam : Elsevier. - 1566-2535 .- 1872-6305. ; 83-84, pp. 110-130
  • Journal article (peer-reviewed), abstract:
    • The massive availability of cameras and personal devices results in a wide variability between imaging conditions, producing large intra-class variations and a significant performance drop if images from heterogeneous environments are compared for person recognition purposes. However, as biometric solutions are extensively deployed, it will be common to replace acquisition hardware as it is damaged or newer designs appear, or to exchange information between agencies or applications operating in different environments. Furthermore, variations in imaging spectral bands can also occur. For example, face images are typically acquired in the visible (VIS) spectrum, while iris images are usually captured in the near-infrared (NIR) spectrum. However, cross-spectrum comparison may be needed if, for example, a face image obtained from a surveillance camera needs to be compared against a legacy database of iris imagery. Here, we propose a multialgorithmic approach to cope with periocular images captured with different sensors. With face masks in the front line to fight against the COVID-19 pandemic, periocular recognition is regaining popularity since it is the only region of the face that remains visible. As a solution to the mentioned cross-sensor issues, we integrate different biometric comparators using a score fusion scheme based on linear logistic regression. This approach is trained to improve the discriminating ability and, at the same time, to encourage fused scores to be represented as log-likelihood ratios. This allows easy interpretation of output scores and the use of Bayes thresholds for optimal decision-making, since scores from different comparators are in the same probabilistic range. We evaluate our approach in the context of the 1st Cross-Spectral Iris/Periocular Competition, whose aim was to compare person recognition approaches when periocular data from visible and near-infrared images are matched. The proposed fusion approach achieves reductions in the error rates of up to 30%–40% in cross-spectral NIR–VIS comparisons with respect to the best individual system, leading to an EER of 0.2% and a FRR of just 0.47% at FAR = 0.01%. It also represents the best overall approach of the mentioned competition. Experiments are also reported on a database of VIS images from two different smartphones, achieving even larger relative improvements and similar performance numbers. We also discuss the proposed approach from the point of view of template size and computation times, with the most computationally heavy comparator playing an important role in the results. Lastly, the proposed method is shown to outperform other popular fusion approaches in multibiometrics, such as the average of scores, Support Vector Machines, or Random Forest. © 2022 The Authors
  •  
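A minimal Python sketch of the score-fusion scheme the abstract in record 1 describes, under stated assumptions: the comparator scores below are synthetic stand-ins, and sklearn's LogisticRegression stands in for the authors' calibration code. The point is that a linear logistic model fuses several comparator scores into one score on a log-likelihood-ratio scale, where a fixed Bayes threshold applies.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, n)  # 1 = genuine comparison, 0 = impostor
# Synthetic scores from three hypothetical comparators: genuine pairs score higher.
scores = rng.normal(loc=labels[:, None] * 2.0, scale=1.0, size=(n, 3))

# Linear logistic regression learns one weight per comparator plus a bias;
# its decision function approximates a log-likelihood ratio (LLR).
fusion = LogisticRegression().fit(scores, labels)
fused_llr = fusion.decision_function(scores)

# On an LLR scale, the Bayes-optimal threshold for equal priors and costs is 0
# (more generally, minus the log prior odds), as the abstract notes.
decisions = fused_llr > 0.0
print("accuracy:", (decisions == labels.astype(bool)).mean())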
2.
  • Ding, Yijie, et al. (author)
  • Multi-correntropy fusion based fuzzy system for predicting DNA N4-methylcytosine sites
  • 2023
  • In: Information Fusion. - Amsterdam : Elsevier. - 1566-2535 .- 1872-6305. ; 100, pp. 1-10
  • Journal article (peer-reviewed), abstract:
    • The identification of DNA N4-methylcytosine (4mC) sites is an important field of bioinformatics. Statistical learning methods and deep learning have been applied in this direction. Previous methods focused on feature representation and feature selection, and did not take into account the deviation caused by noisy samples during recognition. Moreover, these models were not established from the perspective of the prediction error distribution. To address the problem of a complex error distribution, we propose a maximum multi-correntropy criterion based kernelized higher-order fuzzy inference system (MMC-KHFIS), which is constructed with multi-correntropy fusion. Six 4mC and eight UCI data sets are employed to evaluate our model. The MMC-KHFIS achieves better performance in the experiments. © 2023
  •  
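A minimal numpy sketch of a multi-kernel correntropy criterion of the kind record 2 names ("multi-correntropy"); the kernel widths and weights below are illustrative assumptions, not the paper's values. Correntropy of the prediction error saturates for large errors, which is what makes such criteria robust to noisy samples.

import numpy as np

def correntropy(err, sigma):
    # Gaussian-kernel correntropy of an error vector; larger is better, and
    # gross outliers contribute almost nothing, unlike squared error.
    return np.mean(np.exp(-err ** 2 / (2.0 * sigma ** 2)))

def multi_correntropy(err, sigmas=(0.5, 1.0, 2.0), weights=(1/3, 1/3, 1/3)):
    # Fuse several kernel widths so both small and large error scales count.
    return sum(w * correntropy(err, s) for w, s in zip(weights, sigmas))

errors = np.array([0.1, -0.2, 0.05, 5.0])   # one gross outlier
print(multi_correntropy(errors))            # the outlier barely moves the criterion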
3.
  • Muzammal, Muhammad, et al. (author)
  • A Multi-sensor Data Fusion Enabled Ensemble Approach for Medical Data from Body Sensor Networks
  • 2020
  • In: Information Fusion. - : Elsevier. - 1566-2535 .- 1872-6305. ; 53:2020, pp. 155-164
  • Journal article (peer-reviewed), abstract:
    • Wireless Body Sensor Networks (BSNs) are composed of wearable sensors with varying sensing, storage, computation, and transmission capabilities. When data is obtained from multiple devices, multi-sensor fusion is desirable to transform potentially erroneous sensor data into high-quality fused data. In this work, a data fusion enabled ensemble approach is proposed to work with medical data obtained from BSNs in a fog computing environment. Daily activity data is obtained from a collection of sensors and fused to generate high-quality activity data. The fused data is later input to an ensemble classifier for early heart disease prediction. The ensembles are hosted in a fog computing environment and the prediction computations are performed in a decentralised manner. The results from the individual nodes in the fog computing environment are then combined to produce a unified output. For the classification purpose, a novel kernel random forest ensemble is used that produces significantly better quality results than a standard random forest. An extensive experimental study supports the applicability of the solution, and the obtained results are promising: we obtain 98% accuracy when the tree depth is 15, the number of estimators is 40, and 8 features are considered for the prediction task.
  •  
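A minimal sketch of the decentralised ensemble idea in record 3, assuming synthetic data: each simulated fog node trains a forest on its share of the fused sensor data, and per-node class probabilities are averaged into a unified output. A plain RandomForestClassifier stands in for the paper's kernel random forest; the hyperparameters mirror those reported in the abstract (depth 15, 40 estimators, 8 features).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1200, n_features=8, random_state=0)
node_indices = np.array_split(np.arange(len(X)), 3)   # three fog nodes

forests = [
    RandomForestClassifier(n_estimators=40, max_depth=15, max_features=8,
                           random_state=i).fit(X[idx], y[idx])
    for i, idx in enumerate(node_indices)
]

# Combine the node results: average class probabilities, then take the argmax.
proba = np.mean([f.predict_proba(X) for f in forests], axis=0)
print("accuracy:", (proba.argmax(axis=1) == y).mean())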
4.
  • Ning, Xin, et al. (author)
  • DILF : Differentiable rendering-based multi-view Image–Language Fusion for zero-shot 3D shape understanding
  • 2024
  • In: Information Fusion. - Amsterdam : Elsevier. - 1566-2535 .- 1872-6305. ; 102, pp. 1-12
  • Journal article (peer-reviewed), abstract:
    • Zero-shot 3D shape understanding aims to recognize “unseen” 3D categories that are not present in training data. Recently, Contrastive Language–Image Pre-training (CLIP) has shown promising open-world performance in zero-shot 3D shape understanding tasks through information fusion between the language and 3D modalities. It first renders 3D objects into multiple 2D image views and then learns to understand the semantic relationships between the textual descriptions and images, enabling the model to generalize to new and unseen categories. However, existing studies in zero-shot 3D shape understanding rely on predefined rendering parameters, resulting in repetitive, redundant, and low-quality views. This limitation hinders the model's ability to fully comprehend 3D shapes and adversely impacts the text–image fusion in a shared latent space. To this end, we propose a novel approach called Differentiable rendering-based multi-view Image–Language Fusion (DILF) for zero-shot 3D shape understanding. Specifically, DILF leverages large-scale language models (LLMs) to generate textual prompts enriched with 3D semantics and designs a differentiable renderer with learnable rendering parameters to produce representative multi-view images. These rendering parameters can be iteratively updated using a text–image fusion loss, which aids parameter regression, allowing the model to determine the optimal viewpoint positions for each 3D object. Then a group-view mechanism is introduced to model interdependencies across views, enabling efficient information fusion to achieve a more comprehensive 3D shape understanding. Experimental results demonstrate that DILF outperforms state-of-the-art methods for zero-shot 3D classification while maintaining competitive performance for standard 3D classification. The code is available at https://github.com/yuzaiyang123/DILP. © 2023 The Author(s)
  •  
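A minimal numpy sketch of the multi-view image–language fusion step that zero-shot pipelines like the one in record 4 rest on: per-view image embeddings are compared with text-prompt embeddings by cosine similarity, and the view scores are fused (here by a simple mean) into one zero-shot logit per class. The random vectors stand in for CLIP features; the differentiable renderer and LLM prompting are out of scope here.

import numpy as np

rng = np.random.default_rng(0)
views = rng.normal(size=(6, 512))   # embeddings of 6 rendered views of one shape
texts = rng.normal(size=(4, 512))   # embeddings of 4 candidate class prompts

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

sims = normalize(views) @ normalize(texts).T   # (views, classes) cosine similarities
logits = sims.mean(axis=0)                     # fuse across views
print("predicted class:", int(logits.argmax()))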
5.
  • Qu, Zhiguo, et al. (author)
  • Privacy protection in intelligent vehicle networking : A novel federated learning algorithm based on information fusion
  • 2023
  • In: Information Fusion. - Amsterdam : Elsevier. - 1566-2535 .- 1872-6305. ; 98
  • Journal article (peer-reviewed), abstract:
    • Federated learning is an effective technique for solving the problem of information fusion and information sharing in intelligent vehicle networking. However, most existing federated learning algorithms carry a risk of privacy leakage. To address this security risk, this paper proposes a novel personalized federated learning algorithm with privacy preservation (PDP-PFL) based on information fusion. In the first stage of its execution, the new algorithm achieves personalized privacy protection by grading users’ privacy based on their privacy preferences and adding noise that satisfies those preferences. In the second stage, PDP-PFL performs collaborative training of deep models among different in-vehicle terminals for personalized learning, using a lightweight dynamic convolutional network architecture without sharing the local data of each terminal. Instead of sharing all the parameters of the model as in standard federated learning, PDP-PFL keeps the last layer local, thus adding another layer of data confidentiality and making it difficult for an adversary to infer the images of the target vehicle terminal. It trains a personalized model for each vehicle terminal by “local fine-tuning”. Experiments show that the accuracy of the proposed PDP-PFL algorithm is comparable to or better than that of the FedAvg and FedBN algorithms, while further enhancing the protection of data privacy. © 2023 Elsevier B.V.
  •  
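A minimal numpy sketch of two ideas named in record 5, under illustrative assumptions (the layer shapes, noise scales, and clipping rule below are not the paper's): each client noises its shared update in proportion to a graded privacy preference, and the last layer never leaves the device, so the server only averages the earlier layers.

import numpy as np

rng = np.random.default_rng(0)

def share_update(layers, noise_scale):
    # Clip and noise every layer except the last, which stays on-device.
    shared = []
    for w in layers[:-1]:
        w = w / max(1.0, np.linalg.norm(w))   # norm clipping before noising
        shared.append(w + rng.normal(scale=noise_scale, size=w.shape))
    return shared

clients = [
    ([rng.normal(size=(4, 4)), rng.normal(size=(4, 2))], scale)
    for scale in (0.1, 0.5, 1.0)              # graded privacy preferences
]
updates = [share_update(layers, scale) for layers, scale in clients]

# Server side: FedAvg over the shared (non-final) layers only.
avg_first_layer = np.mean([u[0] for u in updates], axis=0)
print(avg_first_layer.shape)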
6.
  • Qu, Zhiguo, et al. (author)
  • QMFND : A quantum multimodal fusion-based fake news detection model for social media
  • 2024
  • In: Information Fusion. - Amsterdam : Elsevier. - 1566-2535 .- 1872-6305. ; 104
  • Journal article (peer-reviewed), abstract:
    • Fake news is frequently disseminated through social media, which significantly impacts public perception and individual decision-making. Accurate identification of fake news on social media is usually time-consuming, laborious, and difficult. Although leveraging machine learning technologies can facilitate automated authenticity checks, the time-sensitive and voluminous nature of the data poses considerable challenges for fake news detection. To address this issue, this paper proposes a quantum multimodal fusion-based model for fake news detection (QMFND). QMFND integrates the extracted image and textual features and passes them through a proposed quantum convolutional neural network (QCNN) to obtain discriminative results. Tests of QMFND on two social media datasets, Gossip and Politifact, show that its detection performance equals or even surpasses that of classical models. The effects of various parameters are further investigated. The QCNN not only has good expressibility and entangling capability but also has good robustness against quantum noise. The code is available at © 2023 Elsevier B.V.
  •  
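A minimal numpy simulation of the kind of parameterized two-qubit block a quantum convolutional layer is assembled from, offered as an illustrative stand-in rather than the QCNN of record 6: two inputs are angle-encoded with RY rotations, trainable RY rotations follow, a CNOT entangles the qubits, and a Z expectation value is read out as the "filter" response.

import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def ry(theta):
    # Single-qubit rotation about the Y axis (real-valued matrix).
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def quantum_filter(x0, x1, theta0, theta1):
    state = np.zeros(4)
    state[0] = 1.0                                             # start in |00>
    # Encode the two inputs, apply trainable rotations, then entangle.
    U = CNOT @ np.kron(ry(theta0), ry(theta1)) @ np.kron(ry(x0), ry(x1))
    state = U @ state
    return state @ (np.kron(Z, I2) @ state)                    # <Z> on qubit 0, in [-1, 1]

print(quantum_filter(0.3, 1.2, theta0=0.5, theta1=-0.8))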
7.
  • Qu, Zhiguo, et al. (author)
  • QNMF : A quantum neural network based multimodal fusion system for intelligent diagnosis
  • 2023
  • In: Information Fusion. - Amsterdam : Elsevier. - 1566-2535 .- 1872-6305. ; 100
  • Journal article (peer-reviewed), abstract:
    • The Internet of Medical Things (IoMT) has emerged as a significant research area in the medical field, enabling the transmission of various types of data to the cloud for analysis and diagnosis. Fusing data from multiple modalities can enhance accuracy but requires substantial computing power. Theoretically, quantum computers can rapidly process large volumes of high-dimensional medical data. Despite accelerated developments in quantum computing, research on quantum machine learning (QML) for multimodal data processing remains limited. Considering these factors, this paper presents a quantum neural network-based multimodal fusion system for intelligent diagnosis (QNMF) that can process multimodal medical data transmitted by IoMT devices, fuse data from different modalities, and improve the performance of intelligent diagnosis. The system employs a quantum convolutional neural network (QCNN) to efficiently extract features from medical images. These QCNN-based features are then fused with features from other modalities (such as blood test results or breast cell slices) and used to train an effective variational quantum classifier (VQC) for intelligent diagnosis. The experimental results demonstrate that a QCNN can effectively extract image data features. Furthermore, QNMF achieved accuracies of 97.07% and 97.61% in breast cancer and Covid-19 diagnosis experiments, respectively. In addition, QNMF exhibits strong robustness to quantum noise. © 2023 Elsevier B.V.
  •  
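A minimal sketch of the fusion step record 7 describes, with the stand-ins clearly labeled: the random matrices below imitate QCNN-extracted image features and blood-test features, and sklearn's LogisticRegression stands in for the variational quantum classifier. The point is the fusion itself: features from the two modalities are concatenated before a single classifier is trained.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
image_feats = rng.normal(size=(n, 16))   # stand-in for QCNN image features
blood_feats = rng.normal(size=(n, 5))    # stand-in for blood-test results
y = rng.integers(0, 2, n)                # synthetic diagnosis labels

fused = np.concatenate([image_feats, blood_feats], axis=1)   # modality fusion
clf = LogisticRegression(max_iter=1000).fit(fused, y)
print("train accuracy:", clf.score(fused, y))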
8.
  • Shao, Haidong, et al. (author)
  • A novel approach of multisensory fusion to collaborative fault diagnosis in maintenance
  • 2021
  • In: Information Fusion. - : Elsevier. - 1566-2535 .- 1872-6305. ; 74, pp. 65-76
  • Journal article (peer-reviewed), abstract:
    • Collaborative fault diagnosis can be facilitated by multisensory fusion technologies, as these can give more reliable results with a more complete data set. Although deep learning approaches have been developed to overcome the problem of relying on subjective experience in conventional fault diagnosis, there are two remaining obstacles to collaborative efficiency: integration of multisensory data and fusion of maintenance strategies. To overcome these obstacles, we propose a novel two-part approach: a stacked wavelet auto-encoder structure with a Morlet wavelet function for multisensory data fusion and a flexible weighted assignment of fusion strategies. Taking a planetary gearbox as an example, we use noisy vibration signals from multiple sensors to test the diagnosis performance of the proposed approach. The results demonstrate that it can provide more accurate and reliable fault diagnosis results than other approaches.
  •  
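A minimal numpy sketch of the core ingredient named in record 8: one auto-encoder layer whose activation is a real-valued Morlet wavelet, cos(5t)·exp(-t²/2), applied to a fused multisensor window. The weights are random and training is omitted; only the wavelet-activated forward pass is shown, and the stacked structure and weighted strategy fusion are out of scope.

import numpy as np

def morlet(t):
    # Real-valued Morlet wavelet used as the activation function.
    return np.cos(5.0 * t) * np.exp(-t ** 2 / 2.0)

rng = np.random.default_rng(0)
x = rng.normal(size=64)                   # fused multisensor signal window
W_enc = 0.1 * rng.normal(size=(16, 64))   # untrained encoder weights
W_dec = 0.1 * rng.normal(size=(64, 16))   # untrained decoder weights

code = morlet(W_enc @ x)                  # wavelet-activated encoding
recon = W_dec @ code                      # linear reconstruction
print("reconstruction error:", np.mean((x - recon) ** 2))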
9.
  • Tiwari, Prayag, 1991-, et al. (author)
  • Quantum Fuzzy Neural Network for multimodal sentiment and sarcasm detection
  • 2024
  • In: Information Fusion. - Amsterdam : Elsevier. - 1566-2535 .- 1872-6305. ; 103, pp. 1-14
  • Journal article (peer-reviewed), abstract:
    • Sentiment and sarcasm detection in social media contribute to assessing social opinion trends. Over the years, most artificial intelligence (AI) methods have relied on real values to characterize the sentimental and sarcastic features in language. These methods often overlook the complexity and uncertainty of sentimental and sarcastic elements in human language. Therefore, this paper proposes the Quantum Fuzzy Neural Network (QFNN), a multimodal fusion and multitask learning algorithm with a Seq2Seq structure that combines classical and quantum neural networks (QNNs) with fuzzy logic. Complex numbers are used in the Fuzzifier to capture sentiment and sarcasm features, and QNNs are used in the Defuzzifier to obtain the prediction. The experiments are conducted on classical computers by constructing quantum circuits in a simulated noisy environment. The results show that QFNN can outperform several recent methods in sarcasm and sentiment detection tasks on two datasets (Mustard and Memotion). Moreover, by assessing the fidelity of quantum circuits in a noisy environment, QFNN was found to have excellent robustness. The QFNN circuit also possesses expressibility and entanglement capability, proving effective in various settings. Our code is available at https://github.com/prayagtiwari/QFNN. © 2023 Elsevier B.V.
  •  
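A minimal numpy sketch of the complex-valued fuzzification idea record 9 mentions, with all functional forms being illustrative assumptions rather than the paper's: each input gets a Gaussian fuzzy membership as magnitude and a feature-dependent phase, giving a complex membership value; the quantum Defuzzifier is replaced here by a plain magnitude readout.

import numpy as np

def complex_membership(x, center=0.0, width=1.0):
    magnitude = np.exp(-((x - center) ** 2) / (2.0 * width ** 2))  # fuzzy degree
    phase = np.pi * x                                              # phase encodes the input
    return magnitude * np.exp(1j * phase)

features = np.array([0.2, -0.4, 0.9])   # e.g. fused multimodal feature values
memberships = complex_membership(features)
fused = memberships.prod()              # product t-norm across rule antecedents
print(abs(fused))                       # magnitude as a crisp score in (0, 1]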
10.
  • Zhang, Yazhou, et al. (author)
  • A Multitask learning model for multimodal sarcasm, sentiment and emotion recognition in conversations
  • 2023
  • In: Information Fusion. - Amsterdam : Elsevier. - 1566-2535 .- 1872-6305. ; 93, pp. 282-301
  • Journal article (peer-reviewed), abstract:
    • Sarcasm, sentiment and emotion are tightly coupled with each other in that one helps the understanding of another, which makes the joint recognition of sarcasm, sentiment and emotion in conversation a focus of research in artificial intelligence (AI) and affective computing. Three main challenges exist: context dependency, multimodal fusion and multitask interaction. However, most of the existing works fail to explicitly leverage and model the relationships among related tasks. In this paper, we aim to generically address the three problems with a multimodal joint framework. We thus propose a multimodal multitask learning model based on the encoder–decoder architecture, termed M2Seq2Seq. At the heart of the encoder module are two attention mechanisms, i.e., intramodal (Ia) attention and intermodal (Ie) attention. Ia attention is designed to capture the contextual dependency between adjacent utterances, while Ie attention is designed to model multimodal interactions. On the decoder side, we design two kinds of multitask learning (MTL) decoders, i.e., single-level and multilevel decoders, to explore their potential. More specifically, the core of the single-level decoder is a masked outer-modal (Or) self-attention mechanism. The main motivation of Or attention is to explicitly model the interdependence among the tasks of sarcasm, sentiment and emotion recognition. The core of the multilevel decoder contains the shared gating and task-specific gating networks. Comprehensive experiments on four benchmark datasets, MUStARD, Memotion, CMU-MOSEI and MELD, prove the effectiveness of M2Seq2Seq over state-of-the-art baselines (e.g., CM-GCN, A-MTL) with significant improvements of 1.9%, 2.0%, 5.0%, 0.8%, 4.3%, 3.1%, 2.8%, 1.0%, 1.7% and 2.8% in terms of Micro F1. © 2023 Elsevier B.V.
  •  
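A minimal numpy sketch of the two encoder attentions named in record 10, with illustrative dimensions: scaled dot-product attention over utterances within one modality (the Ia case) and with queries from one modality attending to another (the Ie case). Only the attention arithmetic is shown, not M2Seq2Seq itself.

import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V

rng = np.random.default_rng(0)
text = rng.normal(size=(5, 8))    # 5 utterances, text features
audio = rng.normal(size=(5, 8))   # aligned audio features

ia = attention(text, text, text)     # intramodal: context across utterances
ie = attention(text, audio, audio)   # intermodal: text attends to audio
print(ia.shape, ie.shape)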