Fairness-oriented and location-aware NUCA for many-core SoC
- Article/chapter, English, 2017
Publisher, publication year, extent
-
2017-10-19
-
New York, NY, USA: Association for Computing Machinery (ACM), 2017
-
print (rdacarrier)
Numbers
-
LIBRIS-ID: oai:DiVA.org:kth-219648
-
URI: https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-219648
-
DOI: https://doi.org/10.1145/3130218.3130225
Supplementary language notes
-
Language: English
-
Summary in: English
Part of subdatabase
Classification
-
Subject category: ref (swepub-contenttype)
-
Subject category: kon (swepub-publicationtype)
Notes
-
QC 20171212
-
Non-uniform cache architecture (NUCA) is often employed to organize the last level cache (LLC) via a Network-on-Chip (NoC). However, as the network size of Systems-on-Chip (SoC) scales up, two trends emerge. First, network latency is becoming the major component of cache access latency. Second, the gap in communication distance and latency between different cores is widening. This gap causes a network latency imbalance problem, aggravates the non-uniformity of cache access latencies, and thereby degrades system performance. In this paper, we propose a novel NUCA-based scheme, named fairness-oriented and location-aware NUCA (FL-NUCA), to alleviate the network latency imbalance problem and achieve more uniform cache access. We strive to equalize network latencies as measured by three metrics: average latency (AL), latency standard deviation (LSD), and maximum latency (ML). In FL-NUCA, both the memory-to-LLC mapping and the links are non-uniformly distributed to better fit the network topology and traffic, thereby equalizing network latencies from two aspects, i.e., non-contention latencies and contention latencies, respectively. The experimental results show that FL-NUCA effectively improves the fairness of network latencies. Compared with traditional static NUCA (SNUCA), in simulations with synthetic traffic, the average improvements in AL, LSD, and ML are 20.9%, 36.3%, and 35.0%, respectively. In simulations with the PARSEC benchmarks, the average improvements in AL, LSD, and ML are 6.3%, 3.6%, and 11.2%, respectively.
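The three fairness metrics the abstract evaluates (AL, LSD, ML) can be sketched as plain statistics over observed per-request network latencies. The record does not specify the measurement methodology, so the sample latency values and the percentage-improvement helper below are illustrative assumptions, not the paper's actual data:

```python
# Sketch of the fairness metrics named in the abstract:
# AL (average latency), LSD (latency standard deviation), ML (maximum latency).
# Assumes plain statistics over a list of per-request network latencies
# (e.g. in cycles); the paper's exact methodology is not given in this record.
from statistics import mean, pstdev

def fairness_metrics(latencies):
    """Return (AL, LSD, ML) for a list of observed network latencies."""
    al = mean(latencies)     # average latency
    lsd = pstdev(latencies)  # standard deviation of latencies (spread = unfairness)
    ml = max(latencies)      # worst-case (maximum) latency
    return al, lsd, ml

def improvement(baseline, proposed):
    """Percentage reduction of a metric relative to a baseline scheme (e.g. SNUCA)."""
    return 100.0 * (baseline - proposed) / baseline

# Hypothetical latency samples for a baseline and a more balanced scheme
snuca_samples = [12, 18, 30, 44, 25, 39]
balanced_samples = [14, 16, 22, 28, 20, 24]
for name, base, prop in zip(("AL", "LSD", "ML"),
                            fairness_metrics(snuca_samples),
                            fairness_metrics(balanced_samples)):
    print(f"{name}: {improvement(base, prop):.1f}% improvement")
```

A lower LSD indicates more uniform access latencies across cores, which is the fairness goal FL-NUCA targets.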
Subject headings and genre
Added entries (persons, corporate bodies, meetings, titles ...)
-
Chen, Xiaowen (KTH, Elektronik; National University of Defense Technology, China) (Swepub:kth)u1a3w6t8
(author)
-
Li, C.
(author)
-
Guo, Y.
(author)
-
KTH, Elektronik
(creator_code:org_t)
Related titles
-
In: 2017 11th IEEE/ACM International Symposium on Networks-on-Chip, NOCS 2017. New York, NY, USA: Association for Computing Machinery (ACM). ISBN 9781450349840