SwePub

Leveraging Existing Microarchitectural Structures to Improve First-Level Caching Efficiency

Alves, Ricardo (author)
Uppsala universitet, Avdelningen för datorteknik, Datorarkitektur och datorkommunikation
Black-Schaffer, David, Professor (supervisor)
Uppsala universitet, Datorarkitektur och datorkommunikation, Datorteknik
Kaxiras, Stefanos, Professor (supervisor)
Uppsala universitet, Datorarkitektur och datorkommunikation, Avdelningen för datorteknik, Datorteknik
Erez, Mattan, Professor (opponent)
Department of Electrical & Computer Engineering at The University of Texas at Austin (UT Austin)
ISBN 9789151306810
Uppsala : Acta Universitatis Upsaliensis, 2019
English, 42 pp.
Series: Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, 1651-6214 ; 1821
  • Doctoral thesis (other academic/artistic)
Abstract
Low-latency data access is essential for performance. To achieve this, processors use fast first-level caches combined with out-of-order execution, to decrease and hide memory access latency, respectively. While these approaches are effective for performance, they cost significant energy, leading to the development of many techniques that require designers to trade off performance against efficiency.

Way-prediction and filter caches are two of the most common strategies for improving first-level cache energy efficiency while still minimizing latency. Both involve compromises: way-prediction trades some latency for better energy efficiency, while filter caches trade some energy efficiency for lower latency. However, these strategies are not mutually exclusive. By borrowing elements from both, and taking into account SRAM memory layout limitations, we propose a novel MRU-L0 cache that mitigates many of their shortcomings while preserving their benefits. Moreover, while first-level caches are tightly integrated into the CPU pipeline, existing work on these techniques largely ignores their impact on instruction scheduling. We show that the variable hit latency introduced by way-mispredictions causes instruction replays of load-dependent instruction chains, which hurts performance and efficiency. We study this effect and propose a variable-latency cache-hit instruction scheduler that identifies potential mis-schedulings, reduces instruction replays, mitigates the negative performance impact, and further improves cache energy efficiency.

Modern pipelines also employ sophisticated execution strategies to hide memory latency and improve performance. While these mechanisms exist primarily for performance and correctness, the intermediate storage they require can also be used as a cache. In this work we demonstrate how the store buffer, paired with the memory dependency predictor, can be used to efficiently cache dirty data, and how the physical register file, paired with a value predictor, can be used to efficiently cache clean data. These strategies not only improve both performance and energy efficiency, but do so with no additional storage and minimal additional complexity, since they recycle existing CPU structures to detect reuse, memory-ordering violations, and misspeculations.
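As background for the way-prediction discussion above, the minimal Python sketch below illustrates the basic idea behind MRU-based way-prediction in a set-associative first-level cache: probe only the most-recently-used way first, and probe the remaining ways only on a way-misprediction, so that most hits cost a single-way lookup at the price of occasional extra latency. The class name, parameters, and statistics are illustrative assumptions and are not taken from the thesis.

from collections import OrderedDict

class MRUWayPredictedCache:
    """Toy model of a first-level cache with MRU way-prediction (illustrative only)."""

    def __init__(self, num_sets=64, ways=8, line_bytes=64):
        self.num_sets = num_sets
        self.ways = ways
        self.line_bytes = line_bytes
        # Each set maps tag -> None, ordered from LRU (front) to MRU (back).
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.stats = {"fast_hits": 0, "slow_hits": 0, "misses": 0,
                      "ways_probed": 0}

    def _index_and_tag(self, addr):
        line = addr // self.line_bytes
        return line % self.num_sets, line // self.num_sets

    def access(self, addr):
        idx, tag = self._index_and_tag(addr)
        s = self.sets[idx]
        # Step 1: probe only the predicted (MRU) way -- the cheap, fast path.
        self.stats["ways_probed"] += 1
        if s and tag == next(reversed(s)):
            self.stats["fast_hits"] += 1
            return "fast-hit"
        # Step 2: way misprediction -- probe the remaining ways (extra energy
        # and, in a real pipeline, an extra cycle of hit latency).
        self.stats["ways_probed"] += self.ways - 1
        if tag in s:
            self.stats["slow_hits"] += 1
            s.move_to_end(tag)               # the hitting line becomes the new MRU
            return "slow-hit"
        # Step 3: miss -- fill the line, evicting the LRU entry if the set is full.
        self.stats["misses"] += 1
        if len(s) >= self.ways:
            s.popitem(last=False)
        s[tag] = None
        return "miss"


if __name__ == "__main__":
    cache = MRUWayPredictedCache()
    # 8-byte loads walking through a 16 KB region: consecutive accesses mostly
    # fall in the same 64-byte line, so the MRU way is usually the right guess.
    for _ in range(100):
        for addr in range(0, 16 * 1024, 8):
            cache.access(addr)
    accesses = sum(cache.stats[k] for k in ("fast_hits", "slow_hits", "misses"))
    baseline = accesses * cache.ways     # a conventional cache probes all ways in parallel
    print(cache.stats)
    print(f"ways probed: {cache.stats['ways_probed']} vs {baseline} without prediction")

Running the example shows single-way probes dominating once the working set is warm; the misprediction fallback path in Step 2 is also what introduces the variable hit latency whose scheduling consequences the abstract discusses.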

Subject headings

NATURVETENSKAP  -- Data- och informationsvetenskap -- Datavetenskap (hsv//swe)
NATURAL SCIENCES  -- Computer and Information Sciences -- Computer Sciences (hsv//eng)

Keywords

Energy Efficient Caching
Memory Architecture
Single Thread Performance
First-Level Caching
Out-of-Order Pipelines
Instruction Scheduling
Filter-Cache
Way-Prediction
Value-Prediction
Register-Sharing

Publication and content type

vet (subject category)
dok (subject category)
