SwePub
Search the SwePub database

  Advanced search

Results list for the search "L773:2192 6352 OR L773:2192 6360"

Search: L773:2192 6352 OR L773:2192 6360

  • Results 1-5 of 5
1.
  • Alawadi, Sadi, et al. (authors)
  • Toward efficient resource utilization at edge nodes in federated learning
  • 2024
  • In: Progress in Artificial Intelligence. Springer Nature. ISSN 2192-6352, e-ISSN 2192-6360.
  • Journal article (peer-reviewed). Abstract:
    • Federated learning (FL) enables edge nodes to collaboratively contribute to constructing a global model without sharing their data. This is accomplished by devices computing local, private model updates that are then aggregated by a server. However, computational resource constraints and network communication can become a severe bottleneck for the larger model sizes typical of deep learning (DL) applications. Edge nodes tend to have limited hardware resources (RAM, CPU), and network bandwidth and reliability at the edge are a concern for scaling federated fleet applications. In this paper, we propose and evaluate an FL strategy inspired by transfer learning in order to reduce resource utilization on devices, as well as the load on the server and network in each global training round. For each local model update, we randomly select layers to train, freezing the remaining part of the model. In doing so, we can reduce both server load and communication costs per round by excluding all untrained layer weights from being transferred to the server. The goal of this study is to empirically explore the potential trade-off between resource utilization on devices and global model convergence under the proposed strategy. We implement the approach using the FL framework FEDn. A number of experiments were carried out over different datasets (CIFAR-10, CASA, and IMDB), performing different tasks using different DL model architectures. Our results show that training the model partially can accelerate the training process, utilize on-device resources efficiently, and reduce data transmission by around 75% and 53% when we train 25% and 50% of the model layers, respectively, without harming the resulting global model accuracy. Furthermore, our results demonstrate a negative correlation between the number of participating clients in the training process and the number of layers that need to be trained on each client's side: as the number of clients increases, the required number of layers decreases. This observation highlights the potential of the approach, particularly in cross-device use cases.
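
A minimal PyTorch sketch of the layer-freezing strategy this abstract describes, under stated assumptions: each client trains only a random subset of the model's top-level layers and uploads only those weights. The data loader and optimizer settings are illustrative; the paper's own implementation uses the FEDn framework, whose API is not reproduced here.

    import random
    import torch
    import torch.nn as nn

    def local_update(model: nn.Module, loader, train_fraction=0.25, lr=0.01):
        # Randomly pick which top-level layers to train; freeze the rest.
        names = [n for n, m in model.named_children()
                 if any(True for _ in m.parameters())]   # skip parameter-free layers
        chosen = set(random.sample(names, max(1, int(len(names) * train_fraction))))
        for name, child in model.named_children():
            for p in child.parameters():
                p.requires_grad = name in chosen

        opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for x, y in loader:                  # one local epoch over the client's data
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

        # Upload only the trained layers' weights; frozen weights stay local,
        # cutting per-round communication roughly in proportion to the freezing.
        return {k: v for k, v in model.state_dict().items()
                if k.split(".")[0] in chosen}
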
2.
  • Boman, Magnus, et al. (authors)
  • Learning machines in Internet-delivered psychological treatment
  • 2019
  • In: Progress in Artificial Intelligence. Springer Verlag. ISSN 2192-6352, e-ISSN 2192-6360. 8(4), pp. 475-485.
  • Journal article (peer-reviewed). Abstract:
    • A learning machine, in the form of a gating network that governs a finite number of different machine learning methods, is described at the conceptual level with examples of concrete prediction subtasks. A historical data set with data from over 5000 patients in Internet-based psychological treatment will be used to equip healthcare staff with decision support for questions pertaining to ongoing and future cases in clinical care for depression, social anxiety, and panic disorder. The organizational knowledge graph is used to inform the weight adjustment of the gating network and to route subtasks to the different methods employed locally for prediction. The result is an operational model for assisting therapists in their clinical work, soon to be validated in a clinical trial.
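
A conceptual sketch of a gating network over a finite set of prediction methods, assuming sklearn-style experts exposing predict_proba. This is not the paper's system; the simple multiplicative-weights update below merely stands in for the knowledge-graph-informed weight adjustment the abstract mentions.

    import numpy as np

    class GatingNetwork:
        def __init__(self, experts):
            self.experts = experts               # fitted models exposing predict_proba
            self.log_w = np.zeros(len(experts))  # one gate weight per method

        def predict_proba(self, X):
            w = np.exp(self.log_w - self.log_w.max())
            w /= w.sum()                         # softmax over gate weights
            return sum(wi * e.predict_proba(X) for wi, e in zip(w, self.experts))

        def update(self, X, y, lr=0.1):
            # Methods that assign higher likelihood to the observed outcomes
            # gain influence on future routing.
            for i, e in enumerate(self.experts):
                p = e.predict_proba(X)[np.arange(len(y)), y]
                self.log_w[i] += lr * np.log(p + 1e-12).mean()
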
3.
  • Bozorgpanah, Aso, et al. (authors)
  • Explainable machine learning models with privacy
  • 2024
  • In: Progress in Artificial Intelligence. Springer. ISSN 2192-6352, e-ISSN 2192-6360. 13, pp. 31-50.
  • Journal article (peer-reviewed). Abstract:
    • The importance of explainable machine learning models is increasing because users want to understand the reasons behind decisions in data-driven models. Interpretability and explainability emerge from this need to design comprehensible systems. This paper focuses on privacy-preserving explainable machine learning. We study two data masking techniques: maximum distance to average vector (MDAV) and additive noise. The former achieves k-anonymity; the latter uses Laplacian noise to avoid record leakage and provide a level of differential privacy. We are interested in the process of developing data-driven models that, at the same time, make explainable decisions and are privacy-preserving. That is, we want to avoid the decision-making process leading to disclosure. To that end, we propose building models from anonymized data: data that are k-anonymous, or to which an appropriate level of noise has been added to satisfy differential privacy requirements. In this paper, we study how explainability is affected by these data protection procedures, using TreeSHAP as our explainability technique. The experiments show that both accuracy and explainability can be preserved to a certain degree, so a trade-off between privacy and explainability is possible for data protection using k-anonymity and noise addition.
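
A compact numpy sketch of the two masking techniques named in the abstract: a simplified MDAV microaggregation (each record is replaced by the centroid of a group of at least k records, yielding k-anonymity) and Laplace additive noise calibrated as sensitivity/epsilon. The canonical MDAV also grows a second group around the record farthest from the first seed; that refinement is omitted here for brevity.

    import numpy as np

    def mdav(X, k):
        X = np.asarray(X, dtype=float)
        out = np.empty_like(X)
        remaining = list(range(len(X)))
        while len(remaining) >= 2 * k:
            R = np.array(remaining)
            centroid = X[R].mean(axis=0)
            seed = R[np.argmax(np.linalg.norm(X[R] - centroid, axis=1))]
            dist = np.linalg.norm(X[R] - X[seed], axis=1)
            group = R[np.argsort(dist)[:k]]       # seed plus its k-1 nearest records
            out[group] = X[group].mean(axis=0)    # replace records by group centroid
            taken = set(int(i) for i in group)
            remaining = [i for i in remaining if i not in taken]
        out[remaining] = X[remaining].mean(axis=0)  # leftovers form the final group
        return out

    def laplace_mask(X, sensitivity, epsilon, rng=None):
        # Additive Laplacian noise with scale = sensitivity / epsilon.
        rng = rng or np.random.default_rng()
        return X + rng.laplace(scale=sensitivity / epsilon, size=np.shape(X))
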
4.
  • Ghoorchian, Kambiz, 1981-, et al. (authors)
  • GDTM: Graph-based Dynamic Topic Models
  • 2020
  • In: Progress in Artificial Intelligence. Springer Nature. ISSN 2192-6352, e-ISSN 2192-6360. 9, pp. 195-207.
  • Journal article (peer-reviewed). Abstract:
    • Dynamic Topic Modeling (DTM) is the ultimate solution for extracting topics from short texts generated in Online Social Networks (OSNs) like Twitter. A DTM solution must be scalable and able to account for the sparsity of short texts and the dynamicity of topics. Current solutions combine probabilistic mixture models like the Dirichlet multinomial or the Pitman-Yor process with approximate inference approaches like Gibbs sampling and stochastic variational inference to account, respectively, for dynamicity and scalability. However, these solutions rely on weak probabilistic language models, which do not account for sparsity in short texts, and their inference is based on iterative optimization algorithms, which have scalability issues for DTM. We present GDTM, a single-pass graph-based DTM algorithm, to solve the problem. GDTM combines a context-rich and incremental feature representation model, called Random Indexing (RI), with a novel online graph partitioning algorithm to address scalability and dynamicity. In addition, GDTM uses a rich language modeling approach based on the Skip-gram technique to account for sparsity. We run multiple experiments over a large-scale Twitter dataset to analyze the accuracy and scalability of GDTM and compare the results with four state-of-the-art approaches. The results show that GDTM outperforms the best approach by 11% on accuracy and runs an order of magnitude faster, while achieving 4 times better topic quality on standard evaluation metrics.
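
A small numpy sketch of Random Indexing (RI), the incremental feature representation GDTM builds on: every term receives a fixed sparse ternary index vector, and a term's context vector accumulates the index vectors of its neighbours within a sliding window, so representations can be updated in a single pass. Dimensionality, sparsity, and window size below are illustrative choices, not the paper's settings.

    import numpy as np

    DIM, NONZERO, WINDOW = 512, 8, 2
    rng = np.random.default_rng(42)
    index_vecs, context_vecs = {}, {}

    def index_vector(term):
        # Fixed sparse ternary vector, created on first sight of the term.
        if term not in index_vecs:
            v = np.zeros(DIM)
            pos = rng.choice(DIM, size=NONZERO, replace=False)
            v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
            index_vecs[term] = v
        return index_vecs[term]

    def update(tokens):
        # Add each neighbour's index vector to the focus term's context vector.
        for i, term in enumerate(tokens):
            ctx = context_vecs.setdefault(term, np.zeros(DIM))
            for j in range(max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)):
                if j != i:
                    ctx += index_vector(tokens[j])

    update("graph based dynamic topic models for short texts".split())
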
5.
  • Torra, Vicenç, et al. (authors)
  • The space of models in machine learning: using Markov chains to model transitions
  • 2021
  • In: Progress in Artificial Intelligence. Springer. ISSN 2192-6352, e-ISSN 2192-6360. 10(3), pp. 321-332.
  • Journal article (peer-reviewed). Abstract:
    • Machine and statistical learning is about constructing models from data. Data is usually understood as a set of records, a database. Nevertheless, databases are not static but change over time. We can understand this as follows: there is a space of possible databases, and a database traverses this space during its lifetime. Therefore, we may consider transitions between databases, and the database space. NoSQL databases also fit this representation. In addition, when we learn models from databases, we can also consider the space of models. Naturally, there are relationships between the space of data and the space of models: any transition in the space of data may correspond to a transition in the space of models. We argue that a better understanding of the space of data and the space of models, as well as the relationships between these two spaces, is basic for machine and statistical learning. The relationship between the two spaces can be exploited in several contexts, e.g., in model selection and data privacy, and we consider it fundamental to understanding generalization and overfitting. In this paper, we develop these ideas. We then consider a distance on the space of models based on a distance on the space of data. More particularly, we consider distance distribution functions and probabilistic metric spaces on the space of data and the space of models. Our modeling of changes in databases is based on Markov chains and transition matrices; this modeling is used in the definition of the distances. We provide examples of our definitions.
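
A toy numpy illustration of the modeling device this abstract describes: database states as a Markov chain whose transition matrix induces distributions over states, from which a distance between two starting databases (here, total variation after n steps) can be read off. The three states and all probabilities are invented for illustration only.

    import numpy as np

    # P[i, j] = probability that database state i evolves into state j.
    P = np.array([[0.8, 0.2, 0.0],
                  [0.1, 0.7, 0.2],
                  [0.0, 0.3, 0.7]])

    def distribution(start_state, n, n_states=3):
        # Distribution over database states after n transitions.
        return np.eye(n_states)[start_state] @ np.linalg.matrix_power(P, n)

    # Total-variation distance between the state distributions induced by
    # two different starting databases after five transitions.
    d01 = 0.5 * np.abs(distribution(0, 5) - distribution(1, 5)).sum()
    print(d01)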