SwePub

Result list for search "WFRF:(Corneo Lorenzo)"

Search: WFRF:(Corneo Lorenzo)

  • Results 1-10 of 16
1.
2.
  • Corneo, Lorenzo, et al. (author)
  • Age of Information-Aware Scheduling for Timely and Scalable Internet of Things Applications
  • 2019
  • In: IEEE Conference on Computer Communications (IEEE INFOCOM 2019). - 9781728105154, pp. 2476-2484
  • Conference paper (peer-reviewed), abstract:
    • We consider large scale Internet of Things applications requesting data from physical devices. We study the problem of timely dissemination of sensor data towards applications with freshness requirements by means of a cache. We aim to minimize direct access to the possibly battery powered physical devices, yet improving Age of Information as a data freshness metric. We propose an Age of Information-aware scheduling policy for the physical device to push sensor updates to the caches located in cloud data centers. Such policy groups application requests based on freshness thresholds, thereby reduces the number of requests and threshold misses, and accounts for delay variation. The policy is incrementally introduced as we study its behavior over ideal and more realistic communication links with delay variation. We numerically evaluate the proposed policy against a simple yet widely used periodic schedule. We show that our informed schedule outperforms the periodic schedule even under high delay variations.
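To make the grouping idea above concrete, the following is a minimal Python sketch of how requests with different freshness thresholds could be batched so that one cache update from the device serves several applications. The class and function names and the `slack` parameter are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: grouping application freshness thresholds so that one
# sensor push can serve several applications before any threshold is missed.
# Names (FreshnessRequest, group_requests, slack) are illustrative only.

from dataclasses import dataclass

@dataclass
class FreshnessRequest:
    app_id: str
    threshold: float  # maximum tolerable Age of Information, in seconds

def group_requests(requests, slack=0.5):
    """Greedily group requests whose thresholds lie within `slack` seconds,
    so a single cache update (one device access) satisfies the whole group."""
    groups = []
    for req in sorted(requests, key=lambda r: r.threshold):
        if groups and req.threshold - groups[-1][0].threshold <= slack:
            groups[-1].append(req)      # reuse the upcoming update
        else:
            groups.append([req])        # schedule a new push for this group
    return groups

# Example: three apps end up sharing two sensor pushes instead of three.
reqs = [FreshnessRequest("a", 2.0), FreshnessRequest("b", 2.3), FreshnessRequest("c", 5.0)]
for g in group_requests(reqs):
    print([r.app_id for r in g], "served by one push before", g[0].threshold, "s")
```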
3.
4.
  • Corneo, Lorenzo, et al. (author)
  • (How Much) Can Edge Computing Change Network Latency?
  • 2021
  • In: 2021 IFIP Networking Conference (IFIP Networking). - Institute of Electrical and Electronics Engineers (IEEE). - 9781665445016, 9783903176393, pp. 1-9
  • Conference paper (peer-reviewed), abstract:
    • Edge computing aims to enable applications with stringent latency requirements, e.g., augmented reality, and tame the overwhelming data streams generated by IoT devices. A core principle of this paradigm is to bring the computation from a distant cloud closer to service consumers and data producers. Consequentially, the issue of edge computing facilities’ placement arises. We present a comprehensive analysis suggesting where to place general-purpose edge computing resources on an Internet-wide scale. We base our conclusions on extensive real-world network measurements. We perform extensive traceroute measurements from RIPE Atlas to datacenters in the US, resulting in a graph of 11K routers. We identify the affiliations of the routers to determine the network providers that can act as edge providers. We devise several edge placement strategies and show that they can improve cloud access latency by up to 30%.
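As a rough illustration of the kind of placement strategy the abstract mentions, here is a hedged Python sketch of a greedy selection of k edge sites over a measured client-to-router latency matrix. The paper's actual strategies, data, and scale differ; all names and numbers below are invented for the example.

```python
# Illustrative sketch (not the paper's exact algorithm): greedy placement of k
# edge sites on a measured latency matrix, choosing the routers that most
# reduce each client's latency to its nearest edge site.

import math

def greedy_edge_placement(latency, k):
    """latency[c][r]: measured RTT from client c to candidate router r.
    Returns up to k router indices chosen to minimize average client latency."""
    n_clients = len(latency)
    n_routers = len(latency[0])
    chosen, best_to_edge = [], [math.inf] * n_clients

    for _ in range(k):
        def avg_if_added(r):
            return sum(min(best_to_edge[c], latency[c][r]) for c in range(n_clients)) / n_clients
        r_best = min((r for r in range(n_routers) if r not in chosen), key=avg_if_added)
        chosen.append(r_best)
        best_to_edge = [min(best_to_edge[c], latency[c][r_best]) for c in range(n_clients)]
    return chosen, sum(best_to_edge) / n_clients

# Toy example: 3 clients, 4 candidate routers, place 2 edge sites.
rtt = [[30, 80, 20, 90],
       [70, 25, 65, 85],
       [40, 60, 35, 10]]
sites, avg_rtt = greedy_edge_placement(rtt, k=2)
print("edge sites:", sites, "avg RTT:", avg_rtt)
```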
5.
  • Corneo, Lorenzo (author)
  • Networked Latency Sensitive Applications - Performance Issues between Cloud and Edge
  • 2021
  • Doctoral thesis (other academic/artistic), abstract:
    • The increasing demand for industrial automation has motivated the development of applications with strict latency requirements, namely, latency-sensitive applications. Such latency requirements can be satisfied by offloading computationally intensive tasks to powerful computing devices over a network at the cost of additional communication latency. Two major computing paradigms are considered for this: (i) cloud computing and (ii) edge computing. Cloud computing provides computation at remote datacenters, at the cost of longer communication latency. Edge computing aims at reducing communication latency by bringing computation closer to the users. This doctoral dissertation mainly investigates relevant issues regarding communication latency trade-offs between the aforementioned paradigms in the context of latency-sensitive applications. This work advances the state of the art with three major contributions. First, we design a suite of scheduling algorithms which are performed on an edge device interposed between a co-located sensor network and remote applications running in cloud datacenters. These algorithms guarantee the fulfillment of latency-sensitive applications' requirements while maximizing the battery life of sensing devices. Second, we estimate under what conditions latency-sensitive applications can be executed in cloud environments. From a broader perspective, we quantify round-trip times needed to access cloud datacenters all around the world. From a narrower perspective, we collect latency measurements to cloud datacenters in metropolitan areas where over 70% of the world's population lives. This Internet-wide large-scale measurement campaign allows us to draw statistically relevant conclusions concerning the readiness of cloud environments to host latency-sensitive applications. Finally, we devise a method to quantify latency improvements that hypothetical edge server deployments could bring to users within a network. This is achieved with a thorough analysis of round-trip times and path characterization, resulting in the design of novel edge server placement algorithms. We show trade-offs between the number of edge servers deployed and the latency improvements experienced by users. This dissertation contributes to the understanding of communication latency in terms of its temporal and spatial distributions, its sources, and its implications for latency-sensitive applications.
6.
  • Corneo, Lorenzo, et al. (author)
  • Scheduling at the Edge for Assisting Cloud Real-Time Systems
  • 2018
  • In: Proceedings of the 2018 Workshop on Theory and Practice for Integrated Cloud, Fog and Edge Computing Paradigms. - New York, NY, USA: ACM. - 9781450357760, pp. 9-14
  • Conference paper (peer-reviewed), abstract:
    • We study edge server support for multiple periodic real-time applications located in different clouds. The edge communicates both with sensor devices over wireless sensor networks and with applications over Internet type networks. The edge caches sensor data and can respond to multiple applications with different timing requirements to the data. The purpose of caching is to reduce the number of multiple direct accesses to the sensor since sensor communication is very energy expensive. However, the data will then age in the cache and eventually become stale for some application. A push update method and the concept of age of information is used to schedule data updates to the applications. An aging model for periodic updates is derived. We propose that the scheduling should take into account periodic sensor updates, the differences in the periodic application updates, the aging in the cache and communication variance. By numerical analysis we study the number of deadline misses for two different scheduling policies with respect to different periods.
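A minimal numerical sketch of the aging idea follows, assuming, for illustration only, ideal links, cache pushes at fixed intervals, and a single application read period; the parameter values are made up and are not the paper's model or results.

```python
# A minimal sketch: a sensor pushes to a cache every `push_period` seconds and
# an application reads the cache at its own period with a maximum tolerable
# age. Counts threshold ("deadline") misses over a horizon. Illustrative only.

def count_misses(push_period, app_period, max_age, horizon=1000.0):
    misses, t = 0, 0.0
    while t < horizon:
        age_at_read = t % push_period   # time since the last cache update
        if age_at_read > max_age:
            misses += 1
        t += app_period
    return misses

# Example: an application reading every 7 s, tolerating 3 s of staleness,
# served by a cache refreshed every 5 s.
print(count_misses(push_period=5.0, app_period=7.0, max_age=3.0))
```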
7.
  • Corneo, Lorenzo, et al. (author)
  • Service Level Agreements for Safe and Configurable Production Environments
  • 2018
  • In: IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). - IEEE. - 1946-0759, 1946-0740
  • Conference paper (peer-reviewed), abstract:
    • This paper focuses on Service Level Agreements (SLAs) for industrial applications that aim to port some of the control functionalities to the cloud. In such applications, industrial requirements should be reflected in SLAs. In this paper, we present an approach to integrate safety-related aspects of an industrial application to SLAs. We also present the approach in a use case. This is an initial attempt to enrich SLAs for industrial settings to consider safety aspects, which has not been investigated thoroughly before.
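To picture how safety-related clauses might sit next to conventional performance clauses in an SLA record, here is a small hypothetical Python sketch. The field names (including the SIL-style integrity level) and the fallback behaviour are assumptions for illustration, not the paper's actual SLA model.

```python
# Hypothetical SLA record extended with safety-related fields, in the spirit of
# reflecting industrial safety requirements in the agreement. Field names are
# illustrative assumptions, not taken from the paper.

from dataclasses import dataclass

@dataclass
class SafetyAwareSLA:
    service: str
    max_latency_ms: float        # conventional performance clause
    availability: float          # e.g. 0.999
    safety_integrity_level: int  # e.g. an IEC 61508-style SIL, 1-4 (assumed field)
    fallback_action: str         # what the plant does if the clause is violated

    def violated(self, measured_latency_ms: float) -> bool:
        return measured_latency_ms > self.max_latency_ms

sla = SafetyAwareSLA("cloud-control-loop", 50.0, 0.999, 2, "switch to local controller")
if sla.violated(measured_latency_ms=72.0):
    print("SLA breach: trigger", sla.fallback_action)
```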
8.
  • Corneo, Lorenzo, et al. (author)
  • Surrounded by the Clouds : A Comprehensive Cloud Reachability Study
  • 2021
  • In: Proceedings of the World Wide Web Conference 2021 (WWW 2021). - New York, NY, USA: Association for Computing Machinery (ACM). - 9781450383127, pp. 295-304
  • Conference paper (peer-reviewed), abstract:
    • In the early days of cloud computing, datacenters were sparsely deployed at distant locations far from end-users, with high end-to-end communication latency. However, today's cloud datacenters have become more geographically spread and network bandwidth keeps increasing, pushing end-user latency down. In this paper, we provide a comprehensive cloud reachability study as we perform extensive global client-to-cloud latency measurements towards 189 datacenters from all major cloud providers. We leverage the well-known measurement platform RIPE Atlas, involving up to 8500 probes deployed in heterogeneous environments, e.g., homes and offices. Our goal is to evaluate the suitability of modern cloud environments for various current and predicted applications. We achieve this by comparing our latency measurements against known human perception thresholds and are able to draw inferences on the suitability of current clouds for novel applications, such as augmented reality. Our results indicate that the current cloud coverage can easily support several latency-critical applications, like cloud gaming, for the majority of the world's population.
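The comparison step the abstract describes, measured RTT against per-application latency budgets, can be pictured with a small Python sketch; the threshold values and probe RTTs below are rough illustrative figures, not the study's data.

```python
# Illustrative sketch: classifying measured client-to-datacenter RTTs against
# per-application latency budgets, as the study does at a much larger scale.
# The threshold values are rough, commonly cited figures, not the paper's.

THRESHOLDS_MS = {
    "augmented reality": 20,
    "cloud gaming": 100,
    "web browsing": 300,
}

def supported_apps(rtt_ms):
    """Return the applications whose (assumed) latency budget the RTT satisfies."""
    return [app for app, limit in THRESHOLDS_MS.items() if rtt_ms <= limit]

measurements = {"probe-berlin": 18.4, "probe-nairobi": 142.0, "probe-sydney": 74.9}
for probe, rtt in measurements.items():
    print(probe, "->", supported_apps(rtt) or ["none of the listed apps"])
```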
9.
10.
  • Dang, The Khang, et al. (author)
  • Cloudy with a chance of short RTTs : analyzing cloud connectivity in the internet
  • 2021
  • In: IMC '21. - New York, NY, USA: ACM Digital Library. - 9781450391290, pp. 62-79
  • Conference paper (peer-reviewed), abstract:
    • Cloud computing has seen continuous growth over the last decade. The recent rise in popularity of next-generation applications brings forth the question: "Can current cloud infrastructure support the low latency requirements of such apps?" Specifically, how the wireless last mile and cloud operators' investments in direct peering agreements with ISPs worldwide affect current cloud reachability and latency has remained largely unexplored. This paper investigates the state of end-user to cloud connectivity over wireless media through extensive measurements over six months. We leverage 115,000 wireless probes on the Speedchecker platform and 195 cloud regions from 9 well-established cloud providers. We evaluate the suitability of current cloud infrastructure to meet the needs of emerging applications and highlight various hindering pressure points. We also compare our results to a previous study over RIPE Atlas. Our key findings are: (i) the largest impact on latency comes from the geographical distance to the datacenter; (ii) the choice of a measurement platform can significantly influence the results; (iii) wireless last-mile access contributes significantly to the overall latency, almost surpassing the impact of the geographical distance in many cases. We also observe that cloud providers with their own private network backbone and direct peering agreements with serving ISPs offer noticeable improvements in latency, especially in its consistency over longer distances.
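One analysis step implied by finding (i), separating the part of the RTT explained by geographical distance from the rest, could look roughly like the Python sketch below; the fibre propagation speed, probe names, and sample values are assumptions for illustration only.

```python
# Rough sketch: compare measured RTT against the propagation lower bound implied
# by geographical distance, so the remainder (last mile, queuing, routing
# detours) can be examined separately. The 2/3-of-light-speed fibre assumption
# and the sample numbers are illustrative, not the paper's data.

SPEED_IN_FIBRE_KM_PER_MS = 200.0  # ~2/3 of c, a common rule of thumb (assumption)

def propagation_floor_ms(distance_km):
    """Minimum RTT explainable by distance alone (out and back, straight line)."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

samples = [
    # (probe, distance to datacenter in km, measured RTT in ms) -- made-up examples
    ("wireless-probe-1", 350, 41.0),
    ("wireless-probe-2", 4200, 97.5),
]
for probe, dist_km, rtt_ms in samples:
    floor = propagation_floor_ms(dist_km)
    print(f"{probe}: {rtt_ms:.1f} ms measured, {floor:.1f} ms distance floor, "
          f"{rtt_ms - floor:.1f} ms unexplained (last mile, routing, queuing)")
```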