SwePub

Result list for search "WFRF:(Kvalsvik Amund Bergland)"

  • Result 1-3 of 3
1.
  • Aimoniotis, Pavlos, et al. (author)
  • Data-Out Instruction-In (DOIN!) : Leveraging Inclusive Caches to Attack Speculative Delay Schemes
  • 2022
  • In: 2022 IEEE International Symposium on Secure and Private Execution Environment Design (SEED 2022). - Institute of Electrical and Electronics Engineers (IEEE). - 9781665485265 - 9781665485272 - pp. 49-60
  • Conference paper (peer-reviewed). Abstract:
    • Although the cache has been a known side-channel for years, it has gained renewed notoriety with the introduction of speculative side-channel attacks such as Spectre, which were able to use caches to not just observe a victim, but to leak secrets. Because the cache continues to be one of the most exploitable side channels, it is often the primary target to safeguard in secure speculative execution schemes. One of the simpler secure speculation approaches is to delay speculative accesses whose effect can be observed until they become non-speculative. Delay-on-Miss, for example, delays all observable speculative loads, i.e., the ones that miss in the cache, and preserves the majority of the performance of the baseline (unsafe speculation) by executing speculative loads that hit in the cache, which were thought to be unobservable. However, previous work has failed to consider how instruction fetching can eject cache lines from the shared, lower level caches, and thus from higher cache levels due to inclusivity. In this work, we show how cache conflicts between instruction fetch and data accesses can extend previous attacks and present the following new insights: 1. It is possible to use lower level caches to perform Prime+Probe through conflicts resulting from instruction fetching. This is an extension to previous Prime+Probe attacks that potentially avoids other developed mitigation strategies. 2. Data-instruction conflicts can be used to perform a Spectre attack that breaks Delay-on-Miss. After acquiring a secret, secret-dependent instruction fetching can cause cache conflicts that result in evictions in the L1D cache, creating observable timing differences. Essentially, it is possible to leak a secret bit-by-bit through the cache, despite Delay-on-Miss defending against caches. We call our new attack Data-Out Instruction-In, DOIN!, and demonstrate it on a real commercial core, the AMD Ryzen 9. We demonstrate how DOIN! interacts with Delay-on-Miss and perform an analysis of noise and bandwidth. Furthermore, we propose a simple defense extension for Delay-on-Miss to maintain its security guarantees, at the cost of negligible performance degradation while executing the Spec06 workloads.
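The measurement primitive that the DOIN! abstract extends is Prime+Probe: fill every way of a cache set with your own lines, let the victim run, then time reloads of those lines; a slow reload means the victim displaced something. The C sketch below shows only that generic timing loop on x86 (GCC/Clang intrinsics). The associativity, set stride, timing threshold, and the empty victim() stub are assumptions for illustration; the paper's actual attack relies on secret-dependent instruction fetches evicting data lines through the shared inclusive cache, which this sketch does not reproduce.

/*
 * Generic Prime+Probe timing loop (x86, GCC/Clang).  Parameters and the
 * victim are placeholders, not the DOIN! proof of concept from the paper.
 */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

#define WAYS   8        /* assumed L1D associativity                  */
#define STRIDE 4096     /* assumed same-set stride (64 sets * 64 B)   */

static uint8_t pool[WAYS * STRIDE];   /* WAYS lines mapping to one cache set */

/* Time a single load, bracketed by rdtscp and fences. */
static inline uint64_t time_load(volatile uint8_t *p)
{
    unsigned aux;
    uint64_t t0, t1;
    _mm_mfence();
    t0 = __rdtscp(&aux);
    (void)*p;
    t1 = __rdtscp(&aux);
    _mm_lfence();
    return t1 - t0;
}

/* Placeholder: a real victim would execute secret-dependent code whose
 * instruction fetches conflict with the primed set. */
static void victim(void) { }

int main(void)
{
    const uint64_t threshold = 100;   /* placeholder hit/miss cutoff in cycles */
    int evicted = 0;

    /* Prime: bring every way of the target set into the cache. */
    for (int w = 0; w < WAYS; w++)
        *(volatile uint8_t *)&pool[w * STRIDE] = 1;

    victim();   /* secret-dependent instruction fetching would happen here */

    /* Probe: a slow reload indicates one of our lines was evicted. */
    for (int w = 0; w < WAYS; w++)
        if (time_load(&pool[w * STRIDE]) > threshold)
            evicted++;

    printf("evicted ways: %d -> guessed secret bit: %d\n", evicted, evicted > 0);
    return 0;
}

In the paper's setting the eviction happens indirectly: the victim's instruction fetches displace lines in the shared lower-level cache, and inclusivity forces the corresponding L1D lines out, so the probe observes the secret even though Delay-on-Miss blocks the usual data-side transmitters.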
2.
  • Aimoniotis, Pavlos, et al. (author)
  • ReCon : Efficient Detection, Management, and Use of Non-Speculative Information Leakage
  • 2023
  • In: 56th IEEE/ACM International Symposium on Microarchitecture, MICRO 2023. - Association for Computing Machinery (ACM). - 9798400703294 - pp. 828-842
  • Conference paper (peer-reviewed). Abstract:
    • In a speculative side-channel attack, a secret is improperly accessed and then leaked by passing it to a transmitter instruction. Several proposed defenses effectively close this security hole by either delaying the secret from being loaded or propagated, or by delaying dependent transmitters (e.g., loads) from executing when fed with tainted input derived from an earlier speculative load. This results in a loss of memory-level parallelism and performance. A security definition proposed recently, in which data already leaked in non-speculative execution need not be considered secret during speculative execution, can provide a solution to the loss of performance. However, detecting and tracking non-speculative leakage carries its own cost, increasing complexity. The key insight of our work that enables us to exploit non-speculative leakage as an optimization to other secure speculation schemes is that the majority of non-speculative leakage is simply due to pointer dereferencing (or base-address indexing) - essentially what many secure speculation schemes prevent from taking place speculatively. We present ReCon that: i) efficiently detects non-speculative leakage by limiting detection to pairs of directly-dependent loads that dereference pointers (or index a base-address); and ii) piggybacks non-speculative leakage information on the coherence protocol. In ReCon, the coherence protocol remembers and propagates the knowledge of what has leaked and therefore what is safe to dereference under speculation. To demonstrate the effectiveness of ReCon, we show how two state-of-the-art secure speculation schemes, Non-speculative Data Access (NDA) and Speculative Taint Tracking (STT), leverage this information to enable more memory-level parallelism both in a single core scenario and in a multicore scenario: NDA with ReCon reduces the performance loss by 28.7% for SPEC2017, 31.5% for SPEC2006, and 46.7% for PARSEC; STT with ReCon reduces the loss by 45.1%, 39%, and 78.6%, respectively.
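The efficiency argument in the ReCon abstract rests on one pattern: if a load's address register was produced by an earlier, already-committed load (a pointer dereference or base-address index), then that pointer value has already leaked non-speculatively. The toy C model below walks an invented committed instruction trace and applies just that detection rule; the register count, the instruction encoding, and the trace are assumptions for illustration, and in the actual scheme this information lives in hardware and is propagated by the coherence protocol, not computed in software.

/* Toy model of the dependent-load detection rule: flag a load whose address
 * register was produced by an earlier committed load.  Everything here is
 * invented for illustration. */
#include <stdbool.h>
#include <stdio.h>

#define NREGS 8

typedef struct {
    bool is_load;   /* does this instruction load from memory?          */
    int  dest;      /* destination register                             */
    int  addr_reg;  /* register supplying the load address (loads only) */
} Insn;

int main(void)
{
    /* produced_by_load[r]: register r currently holds a committed load result */
    bool produced_by_load[NREGS] = { false };

    /* Hypothetical committed trace: r1 = load [r0]; r2 = load [r1]; r3 = ALU */
    Insn trace[] = {
        { .is_load = true,  .dest = 1, .addr_reg = 0 },   /* load base pointer */
        { .is_load = true,  .dest = 2, .addr_reg = 1 },   /* dereference it    */
        { .is_load = false, .dest = 3, .addr_reg = -1 },  /* plain ALU op      */
    };

    for (int i = 0; i < (int)(sizeof trace / sizeof trace[0]); i++) {
        Insn in = trace[i];
        if (in.is_load && in.addr_reg >= 0 && produced_by_load[in.addr_reg])
            printf("insn %d: dependent load -> value in r%d has leaked "
                   "non-speculatively\n", i, in.addr_reg);
        produced_by_load[in.dest] = in.is_load;   /* update at commit */
    }
    return 0;
}

A secure speculation scheme such as NDA or STT can then treat a later speculative dereference of that same value as safe, which is where the abstract's recovered memory-level parallelism comes from.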
3.
  • Kvalsvik, Amund Bergland, et al. (author)
  • Doppelganger Loads : A Safe, Complexity-Effective Optimization for Secure Speculation Schemes
  • 2023
  • In: ISCA '23: Proceedings of the 50th Annual International Symposium on Computer Architecture. - New York, NY : Association for Computing Machinery (ACM). - 9798400700958
  • Conference paper (peer-reviewed). Abstract:
    • Speculative side-channel attacks have forced computer architects to rethink speculative execution. Effectively preventing microarchitectural state from leaking sensitive information will be a key requirement in future processor design. An important limitation of many secure speculation schemes is a reduction in the available memory parallelism, as unsafe loads (depending on the particular scheme) are blocked, as they might potentially leak information. Our contribution is to show that it is possible to recover some of this lost memory parallelism by safely predicting the addresses of these loads in a threat-model transparent way, i.e., without worsening the security guarantees of the underlying secure scheme. To demonstrate the generality of the approach, we apply it to three different secure speculation schemes: Non-speculative Data Access (NDA), Speculative Taint Tracking (STT), and Delay-on-Miss (DoM). An address predictor is trained on non-speculative data and can afterwards predict the addresses of unsafe slow-to-issue loads, preloading the target registers with speculative values that can be released faster on correct predictions than starting the entire load process. This new perspective on speculative execution encompasses all loads and gives speedups separately from prefetching. We call the address-predicted counterparts of loads Doppelganger Loads. They give notable performance improvements for the three secure speculation schemes we evaluate, NDA, STT, and DoM. Doppelganger Loads reduce the geometric mean slowdown by 42%, 48%, and 30% respectively, as compared to an unsafe baseline, for a wide variety of SPEC2006 and SPEC2017 benchmarks. Furthermore, Doppelganger Loads can be efficiently implemented with only minor core modifications, reusing existing resources such as a stride prefetcher, and most importantly, requiring no changes to the memory hierarchy outside the core.
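The recovered memory parallelism in the Doppelganger Loads abstract comes from predicting the address of a blocked, unsafe load using only non-speculative history, much as an existing stride prefetcher already does. The C sketch below is a minimal per-PC stride address predictor written to make that idea concrete; the table size, indexing, confidence policy, and train/predict interface are assumptions for illustration, not the paper's hardware design.

/* Minimal per-PC stride address predictor.  Sizes and policies are
 * illustrative assumptions only. */
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 256

typedef struct {
    uint64_t last_addr;   /* address of the previous committed access      */
    int64_t  stride;      /* last observed stride                          */
    int      confidence;  /* saturating counter; predict only when trained */
} StrideEntry;

static StrideEntry table[TABLE_SIZE];

static StrideEntry *entry_for(uint64_t pc)
{
    return &table[(pc >> 2) % TABLE_SIZE];
}

/* Train on committed (non-speculative) loads only. */
static void train(uint64_t pc, uint64_t addr)
{
    StrideEntry *e = entry_for(pc);
    int64_t stride = (int64_t)(addr - e->last_addr);
    if (stride == e->stride) {
        if (e->confidence < 3)
            e->confidence++;
    } else {
        e->confidence = 0;
    }
    e->stride = stride;
    e->last_addr = addr;
}

/* Predict an address for a blocked load; returns 0 when not confident. */
static uint64_t predict(uint64_t pc)
{
    StrideEntry *e = entry_for(pc);
    return e->confidence >= 2 ? e->last_addr + e->stride : 0;
}

int main(void)
{
    /* Hypothetical load at PC 0x400100 streaming through an array, 64 B apart. */
    for (uint64_t a = 0x1000; a < 0x1000 + 8 * 64; a += 64)
        train(0x400100, a);
    printf("predicted next address: 0x%llx\n",
           (unsigned long long)predict(0x400100));
    return 0;
}

On a correct prediction the preloaded value can be released faster than performing the whole load from scratch; the abstract's point is that this is done in a threat-model transparent way, so the underlying scheme's security guarantees are unchanged.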
Type of publication
conference paper (3)
Type of content
peer-reviewed (3)
Author/Editor
Kaxiras, Stefanos (3)
Aimoniotis, Pavlos (3)
Kvalsvik, Amund Bergland (3)
Själander, Magnus (3)
Chen, Xiaoyue (1)
University
Uppsala University (3)
Language
English (3)
Research subject (UKÄ/SCB)
Engineering and Technology (2)
Natural sciences (1)
