SwePub
Search the SwePub database


Result list for the search "(db:Swepub) pers:(Chen Xiaowen) srt2:(2017)"

Search: (db:Swepub) pers:(Chen Xiaowen) > (2017)

  • Results 1-3 of 3
Numbering | Reference | Cover image | Find
1.
  • Chen, Xiaowen, et al. (author)
  • Round-trip DRAM access fairness in 3D NoC-based many-core systems
  • 2017
  • In: ACM Transactions on Embedded Computing Systems. - : Association for Computing Machinery. - 1539-9087 .- 1558-3465. ; 16:5s
  • Journal article (peer-reviewed) abstract
    • In 3D NoC-based many-core systems, DRAM accesses behave differently because of their different communication distances, and the latency gap between DRAM accesses grows as the network size increases, which leads to unfair DRAM access performance among nodes. This phenomenon may cause high latencies for some DRAM accesses, which become the performance bottleneck of the system. The paper addresses the DRAM access fairness problem in 3D NoC-based many-core systems by narrowing the latency difference of DRAM accesses as well as reducing the maximum latency. Firstly, the latency of a round-trip DRAM access is modeled and the factors causing DRAM access latency differences are discussed in detail. Secondly, DRAM access fairness is further quantitatively analyzed through experiments. Thirdly, we propose to predict the network latency of round-trip DRAM accesses and use the predicted round-trip time as the basis to prioritize DRAM accesses in the DRAM interfaces, so that accesses with potentially high latencies can be transferred as early and as fast as possible, thus achieving fair DRAM access. Experiments with synthetic and application workloads validate that our approach can achieve fair DRAM access and outperforms the traditional First-Come-First-Serve (FCFS) scheduling policy and the scheduling policies proposed by references [7] and [24] in terms of maximum latency, Latency Standard Deviation (LSD), and speedup. In the experiments, the maximum improvements in maximum latency, LSD, and speedup are 12.8%, 6.57%, and 8.3%, respectively. In addition, our proposal incurs very small extra hardware overhead (<0.6%) compared with the three counterparts.
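A minimal C++ sketch of the scheduling idea described in the abstract above: a DRAM-interface queue that serves requests in order of predicted round-trip network latency rather than First-Come-First-Serve, so that accesses likely to be slow are issued as early as possible. The hop-based predictor, the cycle constants, and all identifiers (DramRequest, predict_round_trip) are illustrative assumptions, not the paper's actual model.

// Minimal sketch, not the paper's implementation: a DRAM-interface queue that
// prioritizes requests by predicted round-trip network latency instead of FCFS.
// The linear hop-based predictor below is an assumed placeholder model.
#include <cstdint>
#include <iostream>
#include <queue>
#include <vector>

struct DramRequest {
    int      source_node;    // requesting core/node id
    int      hops_to_dram;   // network distance to the DRAM interface
    uint64_t issue_cycle;    // cycle the request entered the queue
    uint64_t predicted_rt;   // predicted round-trip network latency (cycles)
};

// Assumed predictor: per-hop router delay plus a contention estimate, doubled
// for the reply path. The paper models the round trip in more detail.
uint64_t predict_round_trip(int hops, uint64_t contention_estimate) {
    const uint64_t per_hop_cycles = 3;
    return 2 * (hops * per_hop_cycles + contention_estimate);
}

// Order the queue so the largest predicted round trip is served first, letting
// potentially slow accesses start their DRAM service as early as possible.
struct ByPredictedLatency {
    bool operator()(const DramRequest& a, const DramRequest& b) const {
        return a.predicted_rt < b.predicted_rt;  // max-heap on predicted_rt
    }
};

int main() {
    std::priority_queue<DramRequest, std::vector<DramRequest>, ByPredictedLatency> queue;
    queue.push({0, 2, 100, predict_round_trip(2, 4)});     // nearby node
    queue.push({63, 14, 101, predict_round_trip(14, 4)});  // distant node, served first
    queue.push({7, 6, 102, predict_round_trip(6, 4)});

    while (!queue.empty()) {
        const DramRequest& r = queue.top();
        std::cout << "serve node " << r.source_node
                  << ", predicted round trip " << r.predicted_rt << " cycles\n";
        queue.pop();
    }
    return 0;
}

A max-heap keyed on the predicted round trip is simply the most compact way to express the priority here; a hardware request queue would implement the comparison differently.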
2.
  • Wang, Z., et al. (author)
  • Fairness-oriented and location-aware NUCA for many-core SoC
  • 2017
  • In: 2017 11th IEEE/ACM International Symposium on Networks-on-Chip, NOCS 2017. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450349840
  • Conference paper (peer-reviewed) abstract
    • Non-uniform cache architecture (NUCA) is often employed to organize the last-level cache (LLC) over a Network-on-Chip (NoC). However, as the network size of Systems-on-Chip (SoC) scales up, two trends begin to emerge. First, the network latency is becoming the major source of the cache access latency. Second, the gap in communication distance and latency between different cores is increasing. Such a gap can cause a serious network-latency imbalance problem, aggravate the non-uniformity of cache access latencies, and thereby degrade system performance. In this paper, we propose a novel NUCA-based scheme, named fairness-oriented and location-aware NUCA (FL-NUCA), to alleviate the network-latency imbalance problem and achieve more uniform cache access. We strive to equalize network latencies, which are measured by three metrics: average latency (AL), latency standard deviation (LSD), and maximum latency (ML). In FL-NUCA, the memory-to-LLC mapping and the links are both non-uniformly distributed to better fit the network topology and traffic, thereby equalizing network latencies from two aspects, i.e., non-contention latencies and contention latencies, respectively. The experimental results show that FL-NUCA can effectively improve the fairness of network latencies. Compared with the traditional static NUCA (SNUCA), in simulations with synthetic traffic, the average improvements for AL, LSD, and ML are 20.9%, 36.3%, and 35.0%, respectively. In simulations with PARSEC benchmarks, the average improvements for AL, LSD, and ML are 6.3%, 3.6%, and 11.2%, respectively.
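As an illustration of the non-uniform memory-to-LLC mapping idea named in the abstract above, the following C++ sketch weights each LLC bank by the inverse of its total hop distance in a small mesh and builds a weighted address-interleaving table, so centrally located banks serve a larger share of addresses. The mesh size, the weighting function, and the table construction are assumptions for illustration only; the paper's actual mapping and its non-uniform link distribution are not reproduced here.

// Illustrative sketch only (not FL-NUCA's actual mapping): weight each LLC bank
// by the inverse of its total hop distance in a mesh, then build a non-uniform
// address-to-bank interleaving table so that centrally located banks, which are
// cheaper to reach on average, serve a larger share of the address space.
#include <cmath>
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <vector>

// Sum of Manhattan distances from every node of a dim x dim mesh to bank (x, y).
int sum_of_hops(int x, int y, int dim) {
    int total = 0;
    for (int sx = 0; sx < dim; ++sx)
        for (int sy = 0; sy < dim; ++sy)
            total += std::abs(sx - x) + std::abs(sy - y);
    return total;
}

int main() {
    const int dim = 4;               // assumed 4x4 mesh, one LLC bank per node
    const int table_entries = 1024;  // address-hash buckets

    // Bank weight = 1 / (total hops to reach it): central banks get more weight.
    std::vector<double> weight(dim * dim);
    double weight_sum = 0.0;
    for (int x = 0; x < dim; ++x)
        for (int y = 0; y < dim; ++y) {
            weight[y * dim + x] = 1.0 / sum_of_hops(x, y, dim);
            weight_sum += weight[y * dim + x];
        }

    // Fill the interleaving table: bank b gets roughly its weight share of entries.
    std::vector<int> table;
    table.reserve(table_entries);
    for (int b = 0; b < dim * dim; ++b) {
        int share = static_cast<int>(std::round(table_entries * weight[b] / weight_sum));
        table.insert(table.end(), share, b);
    }
    table.resize(table_entries, 0);  // absorb rounding drift

    // A physical address picks its home bank by hashing cache-line bits into the table.
    auto home_bank = [&](uint64_t addr) { return table[(addr >> 6) % table.size()]; };
    std::cout << "home bank for address 0x1f40: " << home_bank(0x1f40) << "\n";
    return 0;
}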
3.
  • Wang, Z., et al. (author)
  • Fairness-oriented switch allocation for networks-on-chip
  • 2017
  • In: 2017 30th IEEE International System-on-Chip Conference (SOCC). - : IEEE Computer Society. - 9781538640333 ; pp. 304-309
  • Conference paper (peer-reviewed) abstract
    • Networks-on-Chip (NoCs) are becoming the backbone of modern chip multiprocessor (CMP) systems. However, with the number of integrated cores increasing and the network size scaling up, network-latency imbalance is becoming an important problem that seriously affects the performance of both the network and the system. In this paper, we aim to alleviate this problem by optimizing the design of switch allocation. We propose fairness-oriented switch allocation (FOSA), a novel switch allocation strategy, to achieve uniform network latencies. FOSA can improve system performance by achieving a remarkable improvement in balancing network latencies. We evaluate the network and system performance of FOSA with synthetic traffic and SPEC CPU2006 benchmarks in a full-system simulator. Compared with the canonical separable switch allocator (Round-Robin) and the recently proposed switch allocator (TS-Router), the experiments with benchmarks show that our approach decreases maximum latency (ML) by 45.6% and 15.1%, respectively, as well as latency standard deviation (LSD) by 13.8% and 3.9%, respectively. In addition, FOSA improves system throughput by 0.8% over that of TS-Router. Finally, we synthesize FOSA and evaluate its additional area and power consumption.
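A small C++ sketch in the spirit of the fairness-oriented allocation described above: an output-port arbiter that grants the competing input whose head packet has accumulated the most latency so far, instead of using plain round-robin. The accumulated-wait metric and the data structures are assumed stand-ins, not FOSA's actual allocation algorithm.

// Assumption-laden sketch, not the FOSA algorithm itself: a switch output-port
// arbiter that grants the competing input whose head packet has accumulated the
// most network latency so far, instead of plain round-robin, nudging the
// allocator toward more uniform packet latencies.
#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

struct HeadFlit {
    int      input_port;      // input port requesting the output
    uint64_t injected_cycle;  // cycle the packet entered the network
};

// Grant the requester whose packet has waited longest (largest accumulated latency).
std::optional<int> allocate_output(const std::vector<HeadFlit>& requests, uint64_t now) {
    std::optional<int> winner;
    uint64_t worst_wait = 0;
    for (const auto& f : requests) {
        uint64_t waited = now - f.injected_cycle;
        if (!winner || waited > worst_wait) {
            winner = f.input_port;
            worst_wait = waited;
        }
    }
    return winner;  // empty when no input requested this output port
}

int main() {
    // Three inputs compete for one output at cycle 1000; input 2 has waited longest.
    std::vector<HeadFlit> requests = {{0, 950}, {2, 900}, {3, 940}};
    if (auto port = allocate_output(requests, 1000))
        std::cout << "grant input port " << *port << "\n";
    return 0;
}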
Type of publication
conference paper (2)
journal article (1)
Type of content
peer-reviewed (3)
Author/editor
Chen, Xiaowen (3)
Guo, Y (2)
Wang, Z. (2)
Li, C. (2)
Chen, S. (1)
Liu, S. (1)
Lu, Zhonghai (1)
Higher education institution
Kungliga Tekniska Högskolan (3)
Language
English (3)
Research subject (UKÄ/SCB)
Natural sciences (2)
Engineering and technology (1)
Year
2017 (3)
