SwePub
Search the SwePub database

  Advanced search

Hit list for search "WFRF:(Rohner Christian) srt2:(2020-2024)"

Search: WFRF:(Rohner Christian) > (2020-2024)

  • Results 1-10 of 32
Sort/group the hit list

Numbering | Reference | Cover image | Find
1.
  • Borgström, Gustaf, PhD Student, 1984-, et al. (author)
  • Faster Functional Warming with Cache Merging
  • 2022
  • Report (other academic/artistic) abstract
    • Smarts-like sampled hardware simulation techniques achieve good accuracy by simulating many small portions of an application in detail. However, while this reduces the detailed simulation time, it results in extensive cache warming times, as each of the many simulation points requires warming the whole memory hierarchy. Adaptive Cache Warming reduces this time by iteratively increasing warming until achieving sufficient accuracy. Unfortunately, each time the warming increases, the previous warming must be redone, nearly doubling the required warming. We address re-warming by developing a technique to merge the cache states from the previous and additional warming iterations. We demonstrate our merging approach on a multi-level LRU cache hierarchy and evaluate and address the introduced errors. By removing warming redundancy, we expect an ideal 2× warming speedup when using our Cache Merging solution together with Adaptive Cache Warming. Experiments show that Cache Merging delivers an average speedup of 1.44×, 1.84×, and 1.87× for 128kB, 2MB, and 8MB L2 caches, respectively, with 95-percentile absolute IPC errors of only 0.029, 0.015, and 0.006, respectively. These results demonstrate that Cache Merging yields significantly higher simulation speed with minimal losses.
  •  
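
The abstract above (entries 1 and 2) describes merging the cache state from a previous warming interval with the state from an additional, earlier warming interval so that the earlier warming need not be redone. The following is a minimal, illustrative sketch of that idea for an LRU cache, assuming each cache set is represented as an MRU-to-LRU list of block tags; the representation, the function names, and the eviction approximation are assumptions for illustration, not the authors' implementation.

from typing import Dict, Hashable, List

SetState = List[Hashable]          # block tags, ordered most- to least-recently used
CacheState = Dict[int, SetState]   # set index -> LRU ordering of that set


def merge_set(recent: SetState, older: SetState, ways: int) -> SetState:
    """Merge one cache set from two warming intervals.

    Blocks touched in the recent interval (closest to the sample) are newer than
    blocks touched only in the older, additional interval, so the recent ordering
    takes precedence and older blocks fill the remaining ways.
    """
    merged = list(recent)
    for tag in older:
        if tag not in merged:
            merged.append(tag)
    return merged[:ways]           # approximate eviction of blocks that no longer fit


def merge_cache(recent: CacheState, older: CacheState, ways: int) -> CacheState:
    """Merge two whole cache states, set by set."""
    return {s: merge_set(recent.get(s, []), older.get(s, []), ways)
            for s in set(recent) | set(older)}


if __name__ == "__main__":
    recent = {0: ["B", "A"]}        # state after warming just before the sample
    older = {0: ["C", "B", "D"]}    # state after the additional, earlier warming
    print(merge_cache(recent, older, ways=4))   # {0: ['B', 'A', 'C', 'D']}

Truncating each merged set to the associativity only approximates true LRU behaviour over the full, contiguous warming, which is the kind of introduced error the paper evaluates and addresses.
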
2.
  • Borgström, Gustaf, PhD Student, 1984-, et al. (author)
  • Faster Functional Warming with Cache Merging
  • 2023
  • In: PROCEEDINGS OF SYSTEM ENGINEERING FOR CONSTRAINED EMBEDDED SYSTEMS, DRONESE AND RAPIDO 2023. - : Association for Computing Machinery (ACM). - 9798400700453 ; , pp. 39-47
  • Conference paper (peer-reviewed) abstract
    • Smarts-like sampled hardware simulation techniques achieve good accuracy by simulating many small portions of an application in detail. However, while this reduces the simulation time, it results in extensive cache warming times, as each of the many simulation points requires warming the whole memory hierarchy. Adaptive Cache Warming reduces this time by iteratively increasing warming to achieve sufficient accuracy. Unfortunately, each increase requires that the previous warming be redone, nearly doubling the total warming. We address re-warming by developing a technique to merge the cache states from the previous and additional warming iterations. We demonstrate our merging approach on a multi-level LRU cache hierarchy and evaluate and address the introduced errors. Our experiments show that Cache Merging delivers an average speedup of 1.44x, 1.84x, and 1.87x for 128kB, 2MB, and 8MB L2 caches, respectively (vs. a 2x theoretical maximum speedup), with 95-percentile absolute IPC errors of only 0.029, 0.015, and 0.006, respectively. These results demonstrate that Cache Merging yields significantly higher simulation speed with minimal losses.
  •  
3.
  • Borgström, Gustaf (author)
  • Making Sampled Simulations Faster by Minimizing Warming Time
  • 2022
  • Licentiate thesis (other academic/artistic) abstract
    • A computer system simulator is a fundamental tool for computer architects to try out brand-new ideas or explore the system’s response to different configurations when executing different programs. However, even simulating the CPU core in detail is time-consuming, as the execution rate slows down by several orders of magnitude compared to native execution. To solve this problem, previous work, namely SMARTS, demonstrates a statistical sampling methodology that records measurements only from tiny samples throughout the simulation. It spends only a fraction of the full simulation time on these sample measurements. Between detailed sample simulations, SMARTS fast-forwards in the simulation using a greatly simplified and much faster simulation model (compared to full detail), which maintains only necessary parts of the architecture, such as cache memory. This maintenance process is called warming. While warming is mandatory to keep the simulation accuracy high, caches may be sufficiently warm for an accurate simulation long before reaching the sample. In other words, much time may be wasted on warming in SMARTS. In this work, we show that caches can be kept in an accurate state with much less time spent on warming. The first paper presents Adaptive Cache Warming, a methodology for identifying the minimum amount of warming in an iterative process for every SMARTS sample. The rest of the simulation time, previously spent on warming, can be skipped by fast-forwarding between samples using native hardware execution of the code. Doing so results in significantly faster statistically sampled simulation while maintaining accuracy. The second paper presents Cache Merging, which mitigates the redundant warming introduced by Adaptive Cache Warming. We solve this issue by merging the existing warming with a cache warming session that chronologically precedes it. By removing the redundant warming, we gain even more speedup. Together, Adaptive Cache Warming and Cache Merging are a powerful boost for statistically sampled simulations.
  •  
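
Entry 3 above summarises Adaptive Cache Warming: warming is increased iteratively until it no longer changes the sampled result appreciably. Below is a minimal sketch of that loop, assuming a doubling schedule and a stopping rule based on two consecutive IPC estimates agreeing within a tolerance; the helper name warm_and_simulate, the schedule, and the stopping rule are illustrative assumptions, not the thesis' exact criterion.

def adaptive_warming(sample, warm_and_simulate, initial_warming=10_000,
                     tolerance=0.01, max_warming=100_000_000):
    """Grow the warming window before `sample` until the measured IPC stabilises.

    warm_and_simulate(sample, warming) is assumed to warm the caches with
    `warming` instructions before the sample and return the sample's IPC.
    """
    warming = initial_warming
    previous_ipc = warm_and_simulate(sample, warming)
    while warming < max_warming:
        warming *= 2                      # each iteration doubles the warming window
        ipc = warm_and_simulate(sample, warming)
        if abs(ipc - previous_ipc) <= tolerance * abs(previous_ipc):
            return ipc, warming           # considered sufficiently warm
        previous_ipc = ipc
    return previous_ipc, warming


if __name__ == "__main__":
    # Toy stand-in whose IPC estimate converges as warming grows (synthetic numbers).
    fake = lambda sample, warming: 1.0 - 1.0 / (warming ** 0.5)
    print(adaptive_warming("sample-0", fake))

Note that each doubling re-simulates the instructions already covered by the previous warming; removing that redundancy is exactly what Cache Merging, the thesis' second paper, targets.
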
4.
  •  
5.
  •  
6.
  • Corneo, Lorenzo, et al. (author)
  • (How Much) Can Edge Computing Change Network Latency?
  • 2021
  • In: 2021 IFIP Networking Conference (IFIP Networking). - : Institute of Electrical and Electronics Engineers (IEEE). - 9781665445016 - 9783903176393 ; , pp. 1-9
  • Conference paper (peer-reviewed) abstract
    • Edge computing aims to enable applications with stringent latency requirements, e.g., augmented reality, and tame the overwhelming data streams generated by IoT devices. A core principle of this paradigm is to bring the computation from a distant cloud closer to service consumers and data producers. Consequently, the question of where to place edge computing facilities arises. We present a comprehensive analysis suggesting where to place general-purpose edge computing resources on an Internet-wide scale, basing our conclusions on extensive real-world network measurements. We perform traceroute measurements from RIPE Atlas to datacenters in the US, resulting in a graph of 11K routers. We identify the affiliations of the routers to determine the network providers that can act as edge providers. We devise several edge placement strategies and show that they can improve cloud access latency by up to 30%.
  •  
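
The abstract above mentions devising several edge placement strategies over a traceroute-derived router graph. As a rough illustration of what such a strategy can look like, here is a generic greedy, facility-location-style placement sketch; the input format, the greedy rule, and all names are assumptions for illustration and are not taken from the paper.

def greedy_edge_placement(latency, candidates, clients, k, cloud_latency):
    """Pick k candidate routers that most reduce the clients' total access latency.

    latency[(client, router)] -- measured client-to-router latency (ms)
    cloud_latency[client]     -- the client's current latency to the cloud (ms)
    """
    chosen = []
    best = dict(cloud_latency)              # best latency seen so far per client

    for _ in range(k):
        def gain(site):
            # Total latency saved if `site` were added to the current deployment.
            return sum(best[c] - min(best[c], latency.get((c, site), float("inf")))
                       for c in clients)
        site = max((s for s in candidates if s not in chosen), key=gain)
        chosen.append(site)
        for c in clients:                   # clients switch to the new site if it is closer
            best[c] = min(best[c], latency.get((c, site), float("inf")))
    return chosen, best


if __name__ == "__main__":
    clients = ["c1", "c2"]
    lat = {("c1", "r1"): 5.0, ("c2", "r1"): 40.0, ("c1", "r2"): 35.0, ("c2", "r2"): 8.0}
    print(greedy_edge_placement(lat, ["r1", "r2"], clients, k=1,
                                cloud_latency={"c1": 50.0, "c2": 50.0}))
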
7.
  • Corneo, Lorenzo (author)
  • Networked Latency Sensitive Applications - Performance Issues between Cloud and Edge
  • 2021
  • Doctoral thesis (other academic/artistic) abstract
    • The increasing demand for industrial automation has motivated the development of applications with strict latency requirements, namely, latency-sensitive applications. Such latency requirements can be satisfied by offloading computationally intensive tasks to powerful computing devices over a network, at the cost of additional communication latency. Two major computing paradigms are considered for this: (i) cloud computing and (ii) edge computing. Cloud computing provides computation at remote datacenters, at the cost of longer communication latency. Edge computing aims at reducing communication latency by bringing computation closer to the users. This doctoral dissertation mainly investigates communication latency trade-offs between the aforementioned paradigms in the context of latency-sensitive applications. This work advances the state of the art with three major contributions. First, we design a suite of scheduling algorithms which are performed on an edge device interposed between a co-located sensor network and remote applications running in cloud datacenters. These algorithms guarantee the fulfillment of latency-sensitive applications' requirements while maximizing the battery life of sensing devices. Second, we estimate under what conditions latency-sensitive applications can be executed in cloud environments. From a broader perspective, we quantify round-trip times needed to access cloud datacenters all around the world. From a narrower perspective, we collect latency measurements to cloud datacenters in metropolitan areas where over 70% of the world's population lives. This Internet-wide, large-scale measurement campaign allows us to draw statistically relevant conclusions concerning the readiness of cloud environments to host latency-sensitive applications. Finally, we devise a method to quantify the latency improvements that hypothetical edge server deployments could bring to users within a network. This is achieved with a thorough analysis of round-trip times and path characterization, resulting in the design of novel edge server placement algorithms. We show trade-offs between the number of edge servers deployed and the latency improvements experienced by users. This dissertation contributes to the understanding of communication latency in terms of its temporal and spatial distributions, its sources, and its implications for latency-sensitive applications.
  •  
8.
  • Engstrand, Johan, et al. (author)
  • End-to-End Transmission of Physiological Data from Implanted Devices to a Cloud-Enabled Aggregator Using Fat Intra-Body Communication in a Live Porcine Model
  • 2022
  • In: 2022 16TH EUROPEAN CONFERENCE ON ANTENNAS AND PROPAGATION (EUCAP). - : Institute of Electrical and Electronics Engineers (IEEE). - 9788831299046 - 9781665416047
  • Conference paper (peer-reviewed) abstract
    • This article presents, for the first time, the end-to-end transmission of physiological data from implanted antennas mimicking sensors to a cloud-enabled aggregator device using fat intra-body communication (fat-IBC). The experiment was performed on a live porcine model in full accordance with ethical standards. Measurement data from two different sensors were collected and sent through a fat-IBC network. The fat-IBC network consisted of three nodes, of which two used antennas implanted in the fat tissue of a live porcine model and one used an on-body antenna placed on the skin. The sensor data was forwarded via Bluetooth Low Energy to an Intel Health Application Platform device, which in turn forwarded the encrypted data to a web server. The experimental results demonstrate that the fat channel can be used in an end-to-end communication scheme, which could involve relaying of sensor data from an implanted device to an external web server.
  •  
9.
  • Izquierdo Riera, Francisco Blas, 1988, et al. (author)
  • Clipaha : A Scheme to Perform Password Stretching on the Client
  • 2023
  • In: Proceedings of the 9th International Conference on Information Systems Security and Privacy - ICISSP. - : Science and Technology Publications, Lda. - 2184-4356. - 9789897586248 ; , pp. 58-69
  • Conference paper (peer-reviewed) abstract
    • Password security relies heavily on the user's choice of password, but also on the one-way hash functions used to protect stored passwords. To compensate for the increased computing power of attackers, modern password hash functions, such as Argon2, have been made more demanding in terms of computational power and memory requirements. Nowadays, the computation of such hash functions is usually performed by the server (or authenticator) instead of the client. Therefore, constrained Internet of Things devices cannot use such functions when authenticating users. Additionally, the load of computing such functions may expose servers to denial-of-service attacks. In this work, we discuss client-side hashing as an alternative. We propose Clipaha, a client-side hashing scheme that allows using high-security password hashing even on highly constrained server devices. Clipaha is robust to a broader range of attacks compared to previous work and covers important and complex usage scenarios. Our evaluation discusses critical aspects involved in client-side hashing. We also provide an implementation of Clipaha in the form of a web library and benchmark the library on different systems to understand the limitations of its mixed JavaScript and WebAssembly approach. Benchmarks show that our library is 50% faster than similar libraries and can run on some devices where previous work fails. © 2023 by SCITEPRESS – Science and Technology Publications, Lda. Under CC license (CC BY-NC-ND 4.0).
  •  
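
The Clipaha entry above argues for computing the expensive password hash on the client so that constrained servers are not overloaded. The sketch below shows the general client-side-stretching idea using the argon2-cffi package; deriving a deterministic salt from an application identifier and the username (so no server round trip is needed for a salt) and the chosen cost parameters are assumptions for illustration, not Clipaha's exact construction.

import hashlib

from argon2.low_level import Type, hash_secret_raw


def client_side_hash(username: str, password: str, app_id: str = "example.org") -> bytes:
    """Stretch the password on the client; the server only receives the result."""
    # Deterministic per-user salt so independent clients derive the same value.
    salt = hashlib.sha256(f"{app_id}/{username}".encode()).digest()[:16]
    return hash_secret_raw(
        secret=password.encode(),
        salt=salt,
        time_cost=3,            # iterations
        memory_cost=64 * 1024,  # 64 MiB, expressed in KiB
        parallelism=1,
        hash_len=32,
        type=Type.ID,           # Argon2id
    )


if __name__ == "__main__":
    print(client_side_hash("alice", "correct horse battery staple").hex())

Since the stretched value now acts as the password on the wire, the server would typically still apply one cheap hash before storing it, so that a leaked database entry cannot be replayed directly; see the paper for the attacks Clipaha considers.
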
10.
  • Kaveh, Amin, et al. (author)
  • Defining and measuring probabilistic ego networks
  • 2021
  • In: Social Network Analysis and Mining. - : Springer Science and Business Media LLC. - 1869-5450 .- 1869-5469. ; 11
  • Journal article (peer-reviewed) abstract
    • Analyzing ego networks to investigate local properties and behaviors of individuals is a fundamental task in social network research. In this paper we show that there is not a unique way of defining ego networks when the existence of edges is uncertain, since there are two different ways of defining the neighborhood of a node in such network models. Therefore, we introduce two definitions of probabilistic ego networks, called V-Alters-Ego and F-Alters-Ego, both rooted in the literature. Following that, we investigate three fundamental measures (degree, betweenness and closeness) for each definition. We also propose a method to approximate betweenness of an ego node among the neighbors which are connected via shortest paths with length 2. We show that this approximation method is faster to compute and it has high correlation with ego betweenness under the V-Alters-Ego definition in many datasets. Therefore, it can be a reasonable alternative to represent the extent to which a node plays the role of an intermediate node among its neighbors.
  •  
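
The entry above concerns ego networks when edge existence is uncertain, and approximates the ego's betweenness among neighbours connected by length-2 shortest paths. For orientation, here is a sketch of the classic, deterministic ego betweenness that such measures generalise; the edge-list input format is an assumption, and the probabilistic definitions (V-Alters-Ego, F-Alters-Ego) from the article are deliberately not reproduced here.

from itertools import combinations


def ego_betweenness(ego, edges):
    """Everett-Borgatti ego betweenness of `ego` in an undirected graph.

    Every pair of alters with no direct tie contributes 1 / (number of
    length-2 paths between them inside the ego network).
    """
    neighbours = ({v for u, v in edges if u == ego} |
                  {u for u, v in edges if v == ego})
    alter_edges = {frozenset((u, v)) for u, v in edges
                   if u in neighbours and v in neighbours and u != v}

    score = 0.0
    for a, b in combinations(neighbours, 2):
        if frozenset((a, b)) in alter_edges:
            continue                       # directly connected: the ego is not needed
        # Length-2 paths inside the ego network: via the ego, or via a common alter.
        two_paths = 1 + sum(1 for c in neighbours
                            if frozenset((a, c)) in alter_edges
                            and frozenset((b, c)) in alter_edges)
        score += 1.0 / two_paths
    return score


if __name__ == "__main__":
    edges = [("ego", "a"), ("ego", "b"), ("ego", "c"), ("a", "b")]
    print(ego_betweenness("ego", edges))   # a-c and b-c each rely only on the ego -> 2.0
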
Publication type
conference paper (19)
journal article (6)
doctoral thesis (3)
report (2)
other publication (1)
licentiate thesis (1)
Content type
peer-reviewed (24)
other academic/artistic (8)
Author/editor
Rohner, Christian, P ... (21)
Voigt, Thiemo (15)
Rohner, Christian (11)
Magnani, Matteo (4)
Yan, Wenqing, Ph.D. ... (4)
Kaveh, Amin (4)
Black-Schaffer, Davi ... (3)
Augustine, Robin, 19 ... (3)
Corneo, Lorenzo (3)
Gunningberg, Per, Pr ... (3)
Pérez-Penichet, Carl ... (3)
Yan, Wenqing, 1994- (3)
Almgren, Magnus, 197 ... (2)
Perez, Mauricio D. (2)
Mandal, Bappaditya (2)
Borgström, Gustaf, P ... (2)
Zavodovski, Aleksand ... (2)
Hylamia, Sam (2)
Anastasiadi, Elli (1)
Johnsson, Andreas (1)
Song, Weining (1)
Brunström, Anna, Pro ... (1)
Varshney, Ambuj (1)
Picazo-Sanchez, Pabl ... (1)
Scheffler, Matthias (1)
Asan, Noor Badariah (1)
Joseph, Laya (1)
Wang, Chao (1)
Tarasov, Andrey (1)
Kraus, Peter (1)
Borgström, Gustaf (1)
Podobas, Artur, Assi ... (1)
Richter, Sven (1)
Trunschke, Annette (1)
Pratsch, Christoph (1)
Dong, Jinhu (1)
Chou, Po-Hsuan (1)
Mohan, Nitinder (1)
Wong, Walter (1)
Kangasharju, Jussi (1)
Kangasharju, Jussi, ... (1)
Hewage, Kasun (1)
Engstrand, Johan (1)
Lidén, Johan (1)
Girgsdies, Frank (1)
Lunkenbein, Thomas (1)
Knop-Gericke, Axel (1)
Schlögl, Robert (1)
Izquierdo Riera, Fra ... (1)
Gionis, Aristides, P ... (1)
University
Uppsala universitet (29)
RISE (8)
Chalmers tekniska högskola (3)
Högskolan i Halmstad (1)
Language
English (32)
Research subject (UKÄ/SCB)
Engineering and Technology (23)
Natural Sciences (11)
Social Sciences (1)

Year
