SwePub
Search the SwePub database


Hit list for the search "WFRF:(Ye Fanghua)"

Search: WFRF:(Ye Fanghua)

  • Results 1-4 of 4
1.
  • Gu, Huaduo, et al. (author)
  • Performance study and design optimization of novel rotating-cylinder sliding vane rotary compressors
  • 2022
  • In: International Journal of Refrigeration. - : Elsevier BV. - 0140-7007. ; 142, pp. 137-147
  • Journal article (peer-reviewed) abstract
    • Novel rotating-cylinder sliding vane rotary compressors (RC-SVRCs) are promising machines in energy conversion systems due to their extremely low friction. However, three-dimensional visual investigations of geometrical optimizations of RC-SVRCs are highly limited. Moreover, most flow mechanisms and the leakage flow affected by geometric structures have not yet been fully revealed. Therefore, the sliding vane number, the gap heights at the vane tips, and the length-to-diameter ratio are numerically studied in this work. The results show that more sliding vanes (4∼12) lead to more uniform flow and temperature fields, lower mass flow rates and outlet temperatures, and less leakage between the rotor and the rotating cylinder. The mass flow rate, the leakage mL1 between rotor and cylinder, and the leakage mL2 at the vane tip of the 8-sliding-vane scheme are 2.944 × 10−2, 9.949 × 10−4 and 3.692 × 10−5 kg·s−1, respectively. The gap height at the vane tips has a vital effect on the flow fields. As the gap increases (0.01∼0.05 mm), the mass flow rate and isentropic efficiency decrease by 12.68% and 18.26%, while the outlet temperature, leakage mL1 and leakage mL2 increase by 91.5 K, 69.60% and 22 times, respectively. High-temperature areas and leakage flow grow with increasing cylinder length, so a cylinder length of no more than 0.6 diameters is recommended. This work provides recommendations for designing RC-SVRCs. (A back-of-the-envelope reading of the quoted leakage figures follows this record.)
  •  
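  A quick check of the quoted figures for the 8-sliding-vane scheme, as a minimal sketch that uses only the numbers stated in the abstract; the variable names are illustrative:

    # Leakage fractions for the 8-sliding-vane scheme, from the values
    # quoted in the abstract (all in kg/s).
    m_flow = 2.944e-2   # total mass flow rate
    m_L1 = 9.949e-4     # leakage between rotor and rotating cylinder
    m_L2 = 3.692e-5     # leakage at the vane tips

    print(f"rotor-cylinder leakage: {m_L1 / m_flow:.2%}")   # about 3.4% of the delivered flow
    print(f"vane-tip leakage:       {m_L2 / m_flow:.2%}")   # about 0.13% of the delivered flow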
2.
  • Li, Shenghui, 1994-, et al. (author)
  • Auto-Weighted Robust Federated Learning with Corrupted Data Sources
  • 2022
  • In: ACM Transactions on Intelligent Systems and Technology. - : Association for Computing Machinery. - 2157-6904 .- 2157-6912. ; 13:5
  • Journal article (peer-reviewed) abstract
    • Federated learning provides a communication-efficient and privacy-preserving training process by enabling learning statistical models with massive participants without accessing their local data. Standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruptions from outliers, systematic mislabeling, or even adversaries. In this article, we address this challenge by proposing Auto-weighted Robust Federated Learning (ARFL), a novel approach that jointly learns the global model and the weights of local updates to provide robustness against corrupted data sources. We prove a learning bound on the expected loss with respect to the predictor and the weights of clients, which guides the definition of the objective for robust federated learning. We present an objective that minimizes the weighted sum of the clients' empirical risks plus a regularization term, where the weights can be allocated by comparing the empirical risk of each client with the average empirical risk of the best p clients. This method can downweight the clients with significantly higher losses, thereby lowering their contributions to the global model. We show that this approach achieves robustness when the data of corrupted clients is distributed differently from that of the benign ones. To optimize the objective function, we propose a communication-efficient algorithm based on the blockwise minimization paradigm. We conduct extensive experiments on multiple benchmark datasets, including CIFAR-10, FEMNIST, and Shakespeare, considering different neural network models. The results show that our solution is robust against different scenarios, including label shuffling, label flipping, and noisy features, and outperforms the state-of-the-art methods in most scenarios. (A minimal sketch of the client-weighting idea follows this record.)
  •  
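  The client-weighting idea can be illustrated with a short sketch. This is not the authors' released code: the hinge-style linear decay, the threshold of twice the best-p average risk, and the function name are assumptions made purely for illustration.

    import numpy as np

    def illustrative_arfl_weights(client_losses, p):
        """Downweight clients whose empirical risk is far above the average
        risk of the best p clients (an illustrative rule, not the paper's
        exact closed-form solution)."""
        losses = np.asarray(client_losses, dtype=float)
        ref = np.sort(losses)[:p].mean()              # average risk of the p best clients
        # Full weight at or below ref, linear decay up to 2*ref, zero beyond.
        raw = np.clip((2.0 * ref - losses) / ref, 0.0, 1.0)
        return raw / raw.sum()                        # convex combination over clients

    # Toy example: six benign clients and two clients with corrupted data.
    losses = [0.42, 0.39, 0.45, 0.41, 0.44, 0.40, 2.7, 3.1]
    print(np.round(illustrative_arfl_weights(losses, p=4), 3))
    # -> the corrupted clients receive zero weight; the benign clients share the rest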
3.
  • Li, Shenghui, 1994-, et al. (author)
  • Blades : A Unified Benchmark Suite for Byzantine Attacks and Defenses in Federated Learning
  • 2024
  • Conference paper (peer-reviewed) abstract
    • Federated learning (FL) facilitates distributed training across different IoT and edge devices, safeguarding the privacy of their data. The inherent distributed structure of FL introduces vulnerabilities, especially from adversarial devices aiming to skew local updates to their advantage. Despite the plethora of research focusing on Byzantine-resilient FL, the academic community has yet to establish a comprehensive benchmark suite, pivotal for impartial assessment and comparison of different techniques. This paper presents Blades, a scalable, extensible, and easily configurable benchmark suite that supports researchers and developers in efficiently implementing and validating novel strategies against baseline algorithms in Byzantine-resilient FL. Blades contains built-in implementations of representative attack and defense strategies and offers a user-friendly interface that seamlessly integrates new ideas. Using Blades, we re-evaluate representative attacks and defenses on wide-ranging experimental configurations (approximately 1,500 trials in total). Through our extensive experiments, we gained new insights into FL robustness and highlighted previously overlooked limitations due to the absence of thorough evaluations and comparisons of baselines under various attack settings. We maintain the source code and documents at https://github.com/lishenghui/blades. (An illustrative attack-versus-defense comparison, not using the Blades API, follows this record.)
  •  
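  The snippet below does not use the Blades API (see the linked repository for that); it is only a minimal, self-contained illustration of the kind of experiment such a suite automates: a sign-flipping attack on client updates, compared under mean and coordinate-wise median aggregation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: 10 clients estimate the same true gradient;
    # 3 Byzantine clients flip the sign and inflate the magnitude.
    true_grad = rng.normal(size=5)
    honest = [true_grad + 0.1 * rng.normal(size=5) for _ in range(7)]
    byzantine = [-5.0 * true_grad for _ in range(3)]
    updates = np.array(honest + byzantine)

    mean_agg = updates.mean(axis=0)            # naive averaging, easily skewed
    median_agg = np.median(updates, axis=0)    # a classic robust aggregator

    print("error of mean aggregation:  ", np.linalg.norm(mean_agg - true_grad))
    print("error of median aggregation:", np.linalg.norm(median_agg - true_grad))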
4.
  • Ye, Fanghua, et al. (author)
  • Slot Self-Attentive Dialogue State Tracking
  • 2021
  • In: Proceedings of the World Wide Web Conference 2021 (WWW 2021). - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450383127 ; pp. 1598-1608
  • Conference paper (peer-reviewed) abstract
    • An indispensable component in task-oriented dialogue systems is the dialogue state tracker, which keeps track of users' intentions in the course of conversation. The typical approach towards this goal is to fill in multiple pre-defined slots that are essential to complete the task. Although various dialogue state tracking methods have been proposed in recent years, most of them predict the value of each slot separately and fail to consider the correlations among slots. In this paper, we propose a slot self-attention mechanism that can learn the slot correlations automatically. Specifically, a slot-token attention is first utilized to obtain slot-specific features from the dialogue context. Then a stacked slot self-attention is applied on these features to learn the correlations among slots. We conduct comprehensive experiments on two multi-domain task-oriented dialogue datasets, including MultiWOZ 2.0 and MultiWOZ 2.1. The experimental results demonstrate that our approach achieves state-of-the-art performance on both datasets, verifying the necessity and effectiveness of taking slot correlations into consideration. (A minimal sketch of the two attention stages follows this record.)
  •  
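  The two attention stages described in the abstract can be sketched in a few lines of PyTorch. This is an assumption-laden illustration rather than the authors' implementation: the hidden size, the use of nn.MultiheadAttention and nn.TransformerEncoder for the two stages, and the learned slot-embedding queries are illustrative choices.

    import torch
    import torch.nn as nn

    class SlotSelfAttentiveDST(nn.Module):
        """Sketch: slot-token attention pulls one feature vector per slot out of
        the encoded dialogue context, then stacked self-attention over the slot
        features models the correlations among slots."""
        def __init__(self, num_slots, hidden=256, heads=4, layers=2):
            super().__init__()
            self.slot_queries = nn.Embedding(num_slots, hidden)
            self.slot_token_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
            self.slot_self_attn = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(hidden, heads, batch_first=True),
                num_layers=layers)

        def forward(self, context_states):
            # context_states: (batch, seq_len, hidden) from a dialogue-context encoder
            batch = context_states.size(0)
            queries = self.slot_queries.weight.unsqueeze(0).expand(batch, -1, -1)
            # Stage 1: slot-token attention -> one slot-specific feature per slot.
            slot_feats, _ = self.slot_token_attn(queries, context_states, context_states)
            # Stage 2: stacked slot self-attention -> correlations among slots.
            return self.slot_self_attn(slot_feats)   # (batch, num_slots, hidden)

    # Toy usage with random encoder outputs: batch of 2, 20 tokens, 30 slots.
    model = SlotSelfAttentiveDST(num_slots=30)
    print(model(torch.randn(2, 20, 256)).shape)   # torch.Size([2, 30, 256])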


 