SwePub

Search results for the query "WFRF:(Ait Mlouk Addi 1990 )"

  • Results 1-3 of 3
1.
  • Ait-Mlouk, Addi, 1990-, et al. (author)
  • FedBot : Enhancing Privacy in Chatbots with Federated Learning
  • Other publication (other academic/artistic), abstract:
    • Chatbots are mainly data-driven and usually based on utterances that might be sensitive. However, training deep learning models on shared data can violate user privacy. Such issues have existed in chatbots since their inception. The literature offers many approaches to privacy, such as differential privacy and secure multi-party computation, but most of them require access to users' data. In this context, Federated Learning (FL) aims to protect data privacy through distributed learning methods that keep the data in its location. This paper presents FedBot, a proof-of-concept (POC) privacy-preserving chatbot that leverages large-scale customer support data. The POC combines Deep Bidirectional Transformer models and federated learning algorithms to protect customer data privacy during collaborative model training. The results of the proof of concept showcase the potential for privacy-preserving chatbots to transform the customer support industry by delivering personalized and efficient customer service that meets data privacy regulations and legal requirements. Furthermore, the system is specifically designed to improve its performance and accuracy over time by leveraging its ability to learn from previous interactions.
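The FedBot abstract above describes combining transformer models with federated learning so customer utterances never leave the client. As a minimal sketch of the server-side aggregation step such systems typically rely on (standard FedAvg, not the paper's actual code; the function name and array shapes are assumptions):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg step: weighted average of client model
    updates, so raw training data never leaves the clients.

    client_weights: one list of per-layer np.ndarrays per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    # weight each client's contribution by its share of the total data
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]
```

Each round, clients train locally on their private utterances, send only these weight arrays, and receive the aggregated model back.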
2.
  • Alawadi, Sadi, 1983-, et al. (author)
  • Toward efficient resource utilization at edge nodes in federated learning
  • 2024
  • In: Progress in Artificial Intelligence. Springer Science+Business Media B.V. ISSN 2192-6352, e-ISSN 2192-6360; 13(2), pp. 101-117
  • Journal article (peer-reviewed), abstract:
    • Federated learning (FL) enables edge nodes to collaboratively contribute to constructing a global model without sharing their data. This is accomplished by devices computing local, private model updates that are then aggregated by a server. However, computational resource constraints and network communication can become a severe bottleneck for the larger model sizes typical of deep learning (DL) applications. Edge nodes tend to have limited hardware resources (RAM, CPU), and network bandwidth and reliability at the edge are a concern for scaling federated fleet applications. In this paper, we propose and evaluate an FL strategy inspired by transfer learning that reduces resource utilization on devices, as well as the load on the server and network, in each global training round. For each local model update, we randomly select layers to train, freezing the remaining part of the model. In doing so, we can reduce both server load and communication costs per round by excluding all untrained layer weights from being transferred to the server. The goal of this study is to empirically explore the potential trade-off between resource utilization on devices and global model convergence under the proposed strategy. We implement the approach using the FL framework FEDn. A number of experiments were carried out over different datasets (CIFAR-10, CASA, and IMDB), performing different tasks with different DL model architectures. Our results show that training the model partially can accelerate the training process, efficiently utilize on-device resources, and reduce data transmission by around 75% and 53% when we train 25% and 50% of the model layers, respectively, without harming the resulting global model accuracy. Furthermore, our results demonstrate a negative correlation between the number of participating clients and the number of layers that need to be trained on each client's side: as the number of clients increases, fewer layers need to be trained per client. This observation highlights the potential of the approach, particularly in cross-device use cases. © The Author(s) 2024.
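The strategy in the abstract above trains a random subset of layers per round and uploads only those layers' weights. A toy sketch of that select-and-upload logic (not the FEDn implementation; all names and the dict-based model representation are hypothetical):

```python
import random

def select_trainable_layers(layer_names, train_fraction, seed=None):
    """Randomly choose which layers a client trains this round;
    the remaining layers stay frozen."""
    rng = random.Random(seed)
    k = max(1, round(len(layer_names) * train_fraction))
    return set(rng.sample(layer_names, k))

def client_update(model, trainable, local_step):
    """Apply a local training step to the selected layers only and
    return just those weights; frozen layers are never uploaded, so
    per-round communication drops roughly in proportion to
    (1 - train_fraction)."""
    return {name: local_step(w) for name, w in model.items() if name in trainable}
```

With `train_fraction=0.25` a client uploads about a quarter of the model per round, matching the roughly 75% transmission reduction the paper reports for training 25% of the layers.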
3.
  • Heitz, Thomas, et al. (author)
  • Investigation on eXtreme Gradient Boosting for cutting force prediction in milling
  • 2023
  • In: Journal of Intelligent Manufacturing. Springer. ISSN 0956-5515, e-ISSN 1572-8145.
  • Journal article (peer-reviewed), abstract:
    • Accurate prediction of cutting forces is critical in milling operations, with implications for cost reduction and improved manufacturing efficiency. While traditional mechanistic models provide high accuracy, their reliance on extensive milling data for force-coefficient fitting poses challenges. The eXtreme Gradient Boosting algorithm offers a potential solution with reduced data requirements, yet its optimal utilization remains unexplored. This study investigates its effectiveness in predicting cutting forces during down-milling of Al2024. A novel framework is proposed that optimizes precision, efficiency, and user-friendliness. Model training incorporates the mechanistic force model, in both the time and frequency domains, as new features. Through rigorous experimentation, various aspects of the eXtreme Gradient Boosting configuration are explored, including identifying the optimal number of periods for the training dataset, determining the best normalization and scaling technique, and assessing the hyperparameters' impact on model accuracy and computational time. The results show the remarkable effectiveness of the eXtreme Gradient Boosting model, with an average normalized root mean square error of 14.7%, surpassing the 21.9% obtained by the mechanistic force model. Additionally, the machine learning model could capture the runout effect. These findings enable milling operations optimized for cost, accuracy, and computation time.
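The abstract above feeds the mechanistic force model, in both the time and frequency domains, to the booster as extra features. A toy sketch of that feature construction, using scikit-learn's gradient boosting as a stand-in for XGBoost (the simplified mechanistic model, its coefficients, and all names here are hypothetical, not the paper's):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def mechanistic_force(angles, ktc=600.0, kte=20.0, feed=0.1, depth=2.0):
    # Hypothetical simplified mechanistic model: F = (Ktc*h + Kte)*depth,
    # with instantaneous chip thickness h = feed * sin(angle).
    h = feed * np.sin(angles)
    return (ktc * h + kte) * depth

def build_features(angles):
    # Augment the raw input with the mechanistic prediction (time domain)
    # and a few FFT magnitudes of that signal (frequency domain).
    mech = mechanistic_force(angles)
    spec = np.abs(np.fft.rfft(mech))[:3]
    return np.column_stack([angles, mech, np.tile(spec, (len(angles), 1))])

angles = np.linspace(0, 2 * np.pi, 200)
forces = mechanistic_force(angles) + np.random.default_rng(0).normal(0, 5, 200)
model = GradientBoostingRegressor(n_estimators=50, random_state=0)
model.fit(build_features(angles), forces)
```

The idea is that the booster learns only the residual between the mechanistic baseline and the measured forces, which is why fewer training samples suffice than for fitting force coefficients from scratch.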
