SwePub
Search the SwePub database


Result list for the search "WFRF:(Prodan Radu)"

Search: WFRF:(Prodan Radu)

  • Results 1-15 of 15
1.
  • Aamodt, K., et al. (authors)
  • The ALICE experiment at the CERN LHC
  • 2008
  • In: Journal of Instrumentation. - 1748-0221. ; 3:S08002
  • Research review (peer-reviewed), abstract
    • ALICE (A Large Ion Collider Experiment) is a general-purpose, heavy-ion detector at the CERN LHC which focuses on QCD, the strong-interaction sector of the Standard Model. It is designed to address the physics of strongly interacting matter and the quark-gluon plasma at extreme values of energy density and temperature in nucleus-nucleus collisions. Besides running with Pb ions, the physics programme includes collisions with lighter ions, lower energy running and dedicated proton-nucleus runs. ALICE will also take data with proton beams at the top LHC energy to collect reference data for the heavy-ion programme and to address several QCD topics for which ALICE is complementary to the other LHC detectors. The ALICE detector has been built by a collaboration including currently over 1000 physicists and engineers from 105 institutes in 30 countries. Its overall dimensions are 16 x 16 x 26 m³ with a total weight of approximately 10 000 t. The experiment consists of 18 different detector systems, each with its own specific technology choice and design constraints, driven both by the physics requirements and the experimental conditions expected at LHC. The most stringent design constraint is to cope with the extreme particle multiplicity anticipated in central Pb-Pb collisions. The different subsystems were optimized to provide high-momentum resolution as well as excellent Particle Identification (PID) over a broad range in momentum, up to the highest multiplicities predicted for LHC. This will allow for comprehensive studies of hadrons, electrons, muons, and photons produced in the collision of heavy nuclei. Most detector systems are scheduled to be installed and ready for data taking by mid-2008 when the LHC is scheduled to start operation, with the exception of parts of the Photon Spectrometer (PHOS), Transition Radiation Detector (TRD) and Electro Magnetic Calorimeter (EMCal). These detectors will be completed for the high-luminosity ion run expected in 2010. This paper describes in detail the detector components as installed for the first data taking in the summer of 2008.
  •  
2.
  • Bakhshi Valojerdi, Zeinab, 1986-, et al. (authors)
  • Evaluation of Storage Placement in Computing Continuum for a Robotic Application : A Simulation-Based Performance Analysis
  • 2024
  • In: Journal of Grid Computing. - : Springer Science+Business Media B.V.. - 1570-7873 .- 1572-9184. ; 22:2
  • Journal article (peer-reviewed), abstract
    • This paper analyzes the timing performance of a persistent storage solution designed for distributed container-based architectures in industrial control applications. The timing performance analysis is conducted using an in-house simulator, which mirrors our testbed specifications. The storage ensures data availability and consistency even in the presence of faults. The analysis considers four aspects: 1. placement strategy, 2. design options, 3. data size, and 4. evaluation under faulty conditions. Experimental results considering the timing constraints in industrial applications indicate that the storage solution can meet critical deadlines, particularly under specific failure patterns. Comparison results also reveal that, while the method may underperform current centralized solutions in fault-free conditions, it outperforms the centralized solutions in failure scenarios. Moreover, the evaluation method used is applicable for assessing other container-based critical applications with timing constraints that require persistent storage. (See the illustrative sketch after this entry.)
  •  
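The in-house simulator mentioned above is not detailed in the abstract. As a rough illustration of the kind of timing analysis it performs, here is a minimal Python sketch (not the paper's simulator) that estimates quorum-read latency for a replicated store under randomly injected node failures; the node latencies, failure probability, quorum size, and deadline are all invented.

```python
"""Toy sketch (not the paper's simulator): estimate read latency of a
replicated fog store under random node failures and check a deadline."""
import random
import statistics

def simulate_read(node_latencies_ms, replicas=3, quorum=2, p_fail=0.1):
    """Latency of one quorum read: pick replica nodes, drop failed ones,
    and wait for the `quorum` fastest survivors (inf if quorum unreachable)."""
    chosen = random.sample(node_latencies_ms, replicas)
    alive = [lat for lat in chosen if random.random() > p_fail]
    if len(alive) < quorum:
        return float("inf")           # quorum not reachable -> request fails
    return sorted(alive)[quorum - 1]  # time until the quorum-th response arrives

def run(trials=10_000, deadline_ms=50.0):
    # Hypothetical per-node base latencies (ms) for a small fog cluster.
    nodes = [5.0, 8.0, 12.0, 20.0, 35.0]
    samples = [simulate_read(nodes) for _ in range(trials)]
    ok = [s for s in samples if s <= deadline_ms]
    print(f"met {deadline_ms} ms deadline in {len(ok)/trials:.1%} of {trials} reads")
    if ok:
        print(f"median latency of successful reads: {statistics.median(ok):.1f} ms")

if __name__ == "__main__":
    run()
```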
3.
  • Bakhshi Valojerdi, Zeinab, 1986- (author)
  • Persistent Fault-Tolerant Storage at the Fog Layer
  • 2021
  • Licentiate thesis (other academic/artistic), abstract
    • Clouds are powerful computer centers that provide computing and storage facilities that can be remotely accessed. The flexibility and cost-efficiency offered by clouds have made them very popular for business and web applications. The use of clouds is now being extended to safety-critical applications such as factories. However, cloud services do not provide time predictability, which is problematic for such time-sensitive applications. Moreover, delays in the data communication between clouds and the devices the clouds control are unpredictable. Therefore, to increase predictability, an intermediate layer between devices and the cloud is introduced. This layer, the fog layer, aims to provide computational resources closer to the edge of the network. However, the fog computing paradigm relies on resource-constrained nodes, creating new potential challenges in resource management, scalability, and reliability. Solutions such as lightweight virtualization technologies can be leveraged to resolve the dichotomy between performance and reliability in fog computing. In this context, container-based virtualization is a key technology providing lightweight virtualization for cloud computing that can be applied in fog computing as well. Such container-based technologies provide fault tolerance mechanisms that improve the reliability and availability of application execution. Through the study of a robotic use case, we have realized that persistent data storage for stateful applications at the fog layer is particularly important. In addition, we identified the need to enhance the current container orchestration solution to fit fog applications executing in container-based architectures. In this thesis, we identify open challenges in achieving dependable fog platforms. Among these, we focus particularly on scalable, lightweight virtualization, auto-recovery, and re-integration solutions after failures in fog applications and nodes. We implement a testbed to deploy our use case on a container-based fog platform and investigate the fulfillment of key dependability requirements. We enhance the architecture and identify the lack of persistent storage for stateful applications as an important impediment to the execution of control applications. We propose a solution for persistent fault-tolerant storage at the fog layer, which dissociates storage from applications to reduce application load and separates the concern of distributed storage. Our solution includes a replicated data structure supported by a consensus protocol that ensures distributed data consistency and fault tolerance in case of node failures. Finally, we use the UPPAAL verification tool to model and verify the fault tolerance and consistency of our solution. (See the illustrative sketch after this entry.)
  •  
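The thesis abstract mentions a replicated data structure supported by a consensus protocol but does not name the protocol. The sketch below is a hypothetical majority-quorum key-value replica in Python that illustrates the general consistency idea; it is not the thesis' implementation, and all class, key, and parameter names are invented.

```python
"""Hypothetical sketch of majority-quorum replication for a fog key-value
store; illustrates the consistency idea, not the thesis' actual protocol."""

class Replica:
    def __init__(self):
        self.store = {}      # key -> (version, value)
        self.alive = True

    def write(self, key, version, value):
        if not self.alive:
            return False
        cur_version, _ = self.store.get(key, (0, None))
        if version > cur_version:
            self.store[key] = (version, value)
        return True

    def read(self, key):
        return self.store.get(key, (0, None)) if self.alive else None

class QuorumStore:
    """Writes and reads succeed only when a majority of replicas answer."""
    def __init__(self, n=3):
        self.replicas = [Replica() for _ in range(n)]
        self.majority = n // 2 + 1
        self.version = 0

    def put(self, key, value):
        self.version += 1
        acks = sum(r.write(key, self.version, value) for r in self.replicas)
        return acks >= self.majority

    def get(self, key):
        answers = [a for a in (r.read(key) for r in self.replicas) if a is not None]
        if len(answers) < self.majority:
            raise RuntimeError("quorum unavailable")
        return max(answers)[1]   # value with the highest version wins

store = QuorumStore()
store.replicas[2].alive = False            # tolerate one failed fog node
assert store.put("robot/pose", (1.0, 2.0))
print(store.get("robot/pose"))             # -> (1.0, 2.0)
```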
4.
  • Bakhshi, Zeinab, 1986-, et al. (authors)
  • Storage placement in continuum computing for a robotic application
  • Other publication (other academic/artistic), abstract
    • This paper analyzes the timing performance of a persistent storage designed for distributed container-based architectures in industrial control applications. The storage ensures data availability and consistency while accommodating faults. The analysis considers four aspects: 1. placement strategy, 2. design options, 3. data size, and 4. evaluation under faulty conditions. Experimental results considering the timing constraints in industrial applications indicate that the storage solution can meet critical deadlines, particularly under specific failure patterns. Moreover, this evaluation method is applicable for assessing other container-based critical applications with timing constraints that require persistent storage. Further comparison results reveal that, while the method may underperform current centralized solutions under fault-free conditions, it outperforms the centralized solutions in failure scenarios.
  •  
5.
  • Fahringer, Thomas, et al. (authors)
  • ASKALON : a tool set for cluster and Grid computing
  • 2005
  • In: Concurrency and Computation. - : Wiley. - 1532-0626 .- 1532-0634. ; 17:2-4, pp. 143-169
  • Journal article (peer-reviewed), abstract
    • Performance engineering of parallel and distributed applications is a complex task that iterates through various phases, ranging from modeling and prediction, to performance measurement, experiment management, data collection, and bottleneck analysis. There is no evidence so far that all of these phases should/can be integrated into a single monolithic tool. Moreover, the emergence of computational Grids as a common single wide-area platform for high-performance computing raises the idea of providing tools as interacting Grid services that share resources, support interoperability among different users and tools, and, most importantly, provide omnipresent services over the Grid. We have developed the ASKALON tool set to support performance-oriented development of parallel and distributed (Grid) applications. ASKALON comprises four tools, coherently integrated into a service-oriented architecture. SCALEA is a performance instrumentation, measurement, and analysis tool for parallel and distributed applications. ZENTURIO is a general purpose experiment management tool with advanced support for multi-experiment performance analysis and parameter studies. AKSUM provides semi-automatic high-level performance bottleneck detection through a special-purpose performance property specification language. The Performance Prophet enables the user to model and predict the performance of parallel applications at the early stages of development. In this paper we describe the overall architecture of the ASKALON tool set and outline the basic functionality of the four constituent tools. The structure of each tool is based on the composition and sharing of remote Grid services, thus enabling tool interoperability. In addition, a data repository allows the tools to share the common application performance and output data that have been derived by the individual tools. A service repository is used to store common portable Grid service implementations. A general-purpose Factory service is employed to create service instances on arbitrary remote Grid sites. Discovering and dynamically binding to existing remote services is achieved through registry services. The ASKALON visualization diagrams support both online and post-mortem visualization of performance and output data. We demonstrate the usefulness and effectiveness of ASKALON by applying the tools to real-world applications.
  •  
6.
  • Khan, Akif Quddus, et al. (authors)
  • Cloud storage cost: a taxonomy and survey
  • 2024
  • In: World wide web (Bussum). - : Springer. - 1386-145X .- 1573-1413. ; 27:4
  • Journal article (peer-reviewed), abstract
    • Cloud service providers offer application providers virtually infinite storage and computing resources, while providing cost-efficiency and various other quality of service (QoS) properties through a storage-as-a-service (StaaS) approach. Organizations also use multi-cloud or hybrid solutions by combining multiple public and/or private cloud service providers to avoid vendor lock-in, achieve high availability and performance, and optimise cost. Indeed, cost is one of the important factors for organizations when adopting cloud storage; however, cloud storage providers offer complex pricing policies, including the actual storage cost and the cost related to additional services (e.g., network usage cost). In this article, we provide a detailed taxonomy of cloud storage cost and a taxonomy of other QoS elements, such as network performance, availability, and reliability. We also discuss various cost trade-offs, including storage and computation, storage and cache, and storage and network. Finally, we provide a cost comparison across different storage providers under different contexts and a set of user scenarios to demonstrate the complexity of the cost structure, and we discuss existing literature on cloud storage selection and cost optimization. We intend the work presented in this article to provide decision-makers and researchers focusing on cloud storage selection for data placement, cost modelling, and cost optimization with a better understanding of, and insights into, the elements contributing to the storage cost and this complex problem domain. (See the illustrative sketch after this entry.)
  •  
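The survey above discusses cost trade-offs such as storage versus network. As a toy illustration of how StaaS cost components combine, the following Python sketch compares the monthly bill of two hypothetical providers; the provider names, prices, and workload figures are invented, not taken from the article.

```python
"""Illustrative only: compare total monthly StaaS cost for hypothetical
providers, combining storage, request, and network-egress prices.
All prices and provider names are invented, not real quotes."""

def monthly_cost(gb_stored, gb_egress, requests_k,
                 price_gb_month, price_gb_egress, price_per_k_requests):
    return (gb_stored * price_gb_month
            + gb_egress * price_gb_egress
            + requests_k * price_per_k_requests)

# Hypothetical workload: 5 TB stored, 800 GB egress, 2 million requests/month.
workload = dict(gb_stored=5_000, gb_egress=800, requests_k=2_000)

providers = {
    "ProviderA": dict(price_gb_month=0.023, price_gb_egress=0.09, price_per_k_requests=0.0004),
    "ProviderB": dict(price_gb_month=0.018, price_gb_egress=0.12, price_per_k_requests=0.0005),
}

for name, prices in providers.items():
    print(f"{name}: ${monthly_cost(**workload, **prices):,.2f}/month")
```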
7.
  • Khan, Akif Quddus, et al. (authors)
  • Smart Data Placement for Big Data Pipelines : An Approach based on the Storage-as-a-Service Model
  • 2022
  • In: 2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing, UCC. - : Institute of Electrical and Electronics Engineers (IEEE). ; , pp. 317-320
  • Conference paper (peer-reviewed), abstract
    • The development of big data pipelines is a challenging task, especially when data storage is considered as part of the data pipelines. Local storage is expensive, hard to maintain, and comes with several challenges (e.g., data availability, data security, and backup). The use of cloud storage, i.e., Storage-as-a-Service (StaaS), instead of local storage has the potential of providing more flexibility in terms of scalability, fault tolerance, and availability. In this paper, we propose a generic approach to integrate StaaS with data pipelines, i.e., keeping computation on an on-premise server or on a specific cloud while integrating with StaaS for storage, and develop a ranking method for available storage options based on five key parameters: cost, proximity, network performance, the impact of server-side encryption, and user weights. The evaluation carried out demonstrates the effectiveness of the proposed approach in terms of data transfer performance and the feasibility of dynamic selection of a storage option based on four primary user scenarios. (See the illustrative sketch after this entry.)
  •  
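The abstract lists the five ranking parameters (cost, proximity, network performance, server-side encryption impact, and user weights) but not the scoring formula. The sketch below shows one plausible weighted-sum ranking over hypothetical storage options; the option names, normalized metric values, and weights are assumptions, not the paper's data.

```python
"""Hedged sketch of a weighted ranking over storage options; the paper's
exact scoring formula is not given in the abstract, so this shows one
plausible weighted-sum scheme over its five parameters (values invented)."""

# Per-option metrics normalized to [0, 1], higher is better
# (so raw cost and latency would be inverted before normalization).
options = {
    "on-prem-minio":   {"cost": 0.4, "proximity": 0.9, "network": 0.8, "encryption": 0.6},
    "cloud-bucket-eu": {"cost": 0.7, "proximity": 0.5, "network": 0.6, "encryption": 0.9},
    "cloud-bucket-us": {"cost": 0.8, "proximity": 0.2, "network": 0.4, "encryption": 0.9},
}

# User-supplied weights (the fifth parameter in the paper); they sum to 1.
weights = {"cost": 0.35, "proximity": 0.25, "network": 0.25, "encryption": 0.15}

def score(metrics, weights):
    return sum(weights[k] * metrics[k] for k in weights)

ranking = sorted(options, key=lambda o: score(options[o], weights), reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(rank, name, round(score(options[name], weights), 3))
```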
8.
  • Khan, Akif Quddus, et al. (authors)
  • Smart Data Placement Using Storage-as-a-Service Model for Big Data Pipelines
  • 2023
  • In: Sensors. - : MDPI AG. - 1424-8220. ; 23:2
  • Journal article (peer-reviewed), abstract
    • Big data pipelines are developed to process data characterized by one or more of the three big data features, commonly known as the three Vs (volume, velocity, and variety), through a series of steps (e.g., extract, transform, and move), laying the groundwork for the use of advanced analytics and ML/AI techniques. The computing continuum (i.e., cloud/fog/edge) allows access to a virtually infinite amount of resources, where data pipelines could be executed at scale; however, the implementation of data pipelines on the continuum is a complex task that needs to take computing resources, data transmission channels, triggers, data transfer methods, integration of message queues, etc., into account. The task becomes even more challenging when data storage is considered as part of the data pipelines. Local storage is expensive, hard to maintain, and comes with several challenges (e.g., data availability, data security, and backup). The use of cloud storage, i.e., storage-as-a-service (StaaS), instead of local storage has the potential of providing more flexibility in terms of scalability, fault tolerance, and availability. In this article, we propose a generic approach to integrate StaaS with data pipelines, i.e., keeping computation on an on-premise server or on a specific cloud while integrating with StaaS for storage, and develop a ranking method for available storage options based on five key parameters: cost, proximity, network performance, server-side encryption, and user weights/preferences. The evaluation carried out demonstrates the effectiveness of the proposed approach in terms of data transfer performance, utility of the individual parameters, and feasibility of dynamic selection of a storage option based on four primary user scenarios.
  •  
9.
  • Khan, Akif Quddus, et al. (authors)
  • Towards Cloud Storage Tier Optimization with Rule-Based Classification
  • 2023
  • In: Service-Oriented and Cloud Computing. - : Springer Nature. ; , pp. 205-216
  • Conference paper (peer-reviewed), abstract
    • Cloud storage adoption has increased over the years as more and more data has been produced, with particularly high demand for fast processing and low latency. To meet the users’ demands and to provide a cost-effective solution, cloud service providers (CSPs) have offered tiered storage; however, keeping the data in one tier is not a cost-effective approach. Hence, several two-tiered approaches have been developed to classify storage objects into the most suitable tier. In this respect, this paper explores a rule-based classification approach to optimize cloud storage cost by migrating data between different storage tiers. Instead of two, four distinct storage tiers are considered, including premium, hot, cold, and archive. The viability and potential of the approach are demonstrated by comparing cost savings achieved when data was moved between tiers versus when it remained static. The results indicate that the proposed approach has the potential to significantly reduce cloud storage cost, thereby providing valuable insights for organizations seeking to optimize their cloud storage strategies. Finally, the limitations of the proposed approach are discussed along with potential directions for future work, particularly the use of game theory to incorporate a feedback loop that extends and improves the approach. (See the illustrative sketch after this entry.)
  •  
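The paper's concrete classification rules and thresholds are not given in the abstract. The following Python sketch shows what a rule-based mapping of storage objects onto the four tiers (premium, hot, cold, archive) could look like; the access-count and age thresholds are invented for illustration.

```python
"""Hypothetical rule-based tiering sketch: the paper's actual rules and
thresholds are not given in the abstract; these cut-offs are invented."""
from dataclasses import dataclass

@dataclass
class StorageObject:
    name: str
    accesses_last_30d: int       # read/download count over the last month
    days_since_last_access: int

def classify(obj: StorageObject) -> str:
    """Map an object to one of the four tiers considered in the paper."""
    if obj.accesses_last_30d >= 100:
        return "premium"
    if obj.accesses_last_30d >= 10:
        return "hot"
    if obj.days_since_last_access <= 180:
        return "cold"
    return "archive"

for obj in [StorageObject("model.bin", 450, 0),
            StorageObject("logs-2023.tar", 3, 40),
            StorageObject("backup-2019.tar", 0, 900)]:
    print(obj.name, "->", classify(obj))
```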
10.
  • Khan, Akif Quddus, et al. (authors)
  • Towards Graph-based Cloud Cost Modelling and Optimisation
  • 2023
  • In: Proceedings. - : Institute of Electrical and Electronics Engineers (IEEE). ; , pp. 1337-1342
  • Conference paper (peer-reviewed), abstract
    • Cloud computing has become an increasingly popular choice for businesses and individuals due to its flexibility, scalability, and convenience; however, the rising cost of cloud resources has become a significant concern for many. The pay-per-use model used in cloud computing means that costs can accumulate quickly, and the lack of visibility and control can result in unexpected expenses. The cost structure becomes even more complicated when dealing with hybrid or multi-cloud environments. For businesses, the cost of cloud computing can be a significant portion of their IT budget, and any savings can lead to better financial stability and competitiveness. It is therefore essential to manage cloud costs effectively. This requires a deep understanding of current resource utilization, forecasting future needs, and optimising resource utilization to control costs. To address this challenge, new tools and techniques are being developed to provide more visibility and control over cloud computing costs. To this end, this paper explores a graph-based solution for modelling cost elements and cloud resources and potential ways to solve the resulting constraint problem of cost optimisation. We primarily consider utilization, cost, performance, and availability in this context. Such an approach will eventually help organizations make informed decisions about cloud resource placement and manage the costs of software applications and data workflows deployed in single, hybrid, or multi-cloud environments. (See the illustrative sketch after this entry.)
  •  
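The abstract describes modelling cost elements and cloud resources as a graph and solving a cost-optimisation problem over it. The toy sketch below encodes a two-step workflow, per-resource node costs, and transfer-cost edges, then enumerates placements to find the cheapest; the graph structure and all numbers are invented, not taken from the paper.

```python
"""Toy sketch of graph-based cost modelling: resources are nodes, data
transfers are weighted edges; we enumerate placements of a two-step
workflow and pick the cheapest. Structure and numbers are invented."""
from itertools import product

# Node costs: monthly cost of running a workflow step on a given resource.
node_cost = {
    ("ingest",  "edge-gw"):  20,  ("ingest",  "cloud-vm"): 35,
    ("analyze", "cloud-vm"): 60,  ("analyze", "on-prem"):  80,
}
# Edge costs: data-transfer cost between the chosen resources (per month).
transfer_cost = {
    ("edge-gw", "cloud-vm"): 15, ("edge-gw", "on-prem"): 5,
    ("cloud-vm", "cloud-vm"): 0, ("cloud-vm", "on-prem"): 25,
}

def placement_cost(ingest_on, analyze_on):
    return (node_cost[("ingest", ingest_on)]
            + node_cost[("analyze", analyze_on)]
            + transfer_cost[(ingest_on, analyze_on)])

candidates = product(["edge-gw", "cloud-vm"], ["cloud-vm", "on-prem"])
best = min(candidates, key=lambda p: placement_cost(*p))
print("cheapest placement:", best, "costs", placement_cost(*best))
```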
11.
  • Kimovski, Dragi, et al. (authors)
  • Beyond von Neumann in the computing continuum : architectures, applications, and future directions
  • 2023
  • In: IEEE Internet Computing. - : IEEE Computer Society. - 1089-7801 .- 1941-0131. ; , pp. 1-11
  • Journal article (peer-reviewed), abstract
    • The article discusses the emerging non-von Neumann computer architectures and their integration in the computing continuum for supporting modern distributed applications, including artificial intelligence, big data, and scientific computing. It provides a detailed summary of the available and emerging non-von Neumann architectures, which range from power-efficient single-board accelerators to quantum and neuromorphic computers. Furthermore, it explores their potential benefits for revolutionizing data processing and analysis in various societal, scientific, and industrial fields. The paper provides a detailed analysis of the most widely used class of distributed applications and discusses the difficulties in their execution over the computing continuum, including communication, interoperability, orchestration, and sustainability issues.
  •  
12.
  • Nikolov, Nikolay, et al. (authors)
  • Container-Based Data Pipelines on the Computing Continuum for Remote Patient Monitoring
  • 2023
  • In: Computer. - : Institute of Electrical and Electronics Engineers (IEEE). - 0018-9162 .- 1558-0814. ; 56:10, pp. 40-48
  • Journal article (peer-reviewed), abstract
    • The emerging concept of big data pipelines provides relevant solutions and is one of the main enablers of telemedicine. We present a data pipeline for remote patient monitoring and show a real-world example of how data pipelines help address the stringent requirements of telemedicine.
  •  
13.
  • Roman, Dumitru, et al. (authors)
  • Big Data Pipelines on the Computing Continuum : Tapping the Dark Data
  • 2022
  • In: Computer. - : Institute of Electrical and Electronics Engineers (IEEE). - 0018-9162 .- 1558-0814. ; 55:11, pp. 74-84
  • Journal article (peer-reviewed), abstract
    • The computing continuum enables new opportunities for managing big data pipelines, in particular the efficient management of heterogeneous and untrustworthy resources. We discuss the lifecycle of big data pipelines on the computing continuum and its associated challenges, and we outline a future research agenda in this area.
  •  
14.
  •  
15.
  •  
