SwePub
Search the SwePub database


Hit list for the search "WFRF:(Haridi S.)"

  • Results 1-10 of 15
1.
  • Alsayfi, Majed S., et al. (authors)
  • Big Data in Vehicular Cloud Computing : Review, Taxonomy, and Security Challenges
  • 2022
  • In: ELEKTRONIKA IR ELEKTROTECHNIKA. - : Kaunas University of Technology (KTU). - 1392-1215 .- 2029-5731. ; 28:2, pp. 59-71
  • Research review (peer-reviewed), abstract:
    • Modern vehicles equipped with various smart sensors have become not only a means of transportation but also a means of collecting, creating, computing, processing, and transferring data while traveling through modern and rural cities. A traditional vehicular ad hoc network (VANET) cannot handle the enormous and complex data collected by modern vehicle sensors (e.g., cameras, lidar, and global positioning systems (GPS)), because these data require rapid processing, analysis, management, storage, and uploading to trusted national authorities. Furthermore, integrating VANETs with cloud computing yields a new concept, vehicular cloud computing (VCC), which overcomes the limitations of VANETs, brings new services and applications to vehicular networks, and generates a massive amount of data compared to the data collected by individual vehicles alone. Therefore, this study explores the importance of big data in VCC. First, we provide an overview of traditional vehicular networks and their limitations. Then we investigate the relationship between VCC and big data, focusing on how VCC can generate, transmit, store, upload, and process big data to share it among vehicles on the road. Subsequently, a new taxonomy of big data in VCC is presented. Finally, the security challenges in big data-based VCC are discussed.
2.
  • Alsayfi, Majed S., et al. (authors)
  • Securing Real-Time Video Surveillance Data in Vehicular Cloud Computing : A Survey
  • 2022
  • In: IEEE Access. - : Institute of Electrical and Electronics Engineers (IEEE). - 2169-3536. ; 10, pp. 51525-51547
  • Journal article (peer-reviewed), abstract:
    • Vehicular ad hoc networks (VANETs) have received a great deal of interest, especially in wireless communications technology. In VANETs, vehicles are equipped with various intelligent sensors that can collect real-time data from inside and from surrounding vehicles. These real-time data require powerful computation, processing, and storage. However, VANETs cannot manage these real-time data because of the limited storage capacity of the on-board unit (OBU). To address this limitation, a new concept has been proposed in which a VANET is integrated with cloud computing to form vehicular cloud computing (VCC) technology. VCC can manage real-time services, such as the real-time video surveillance data used for monitoring critical events on the road. These real-time video surveillance data are highly sensitive and should be protected against intruders in the network, because any manipulation, alteration, or sniffing of the data can endanger a driver's life by causing improper decision-making. The security and privacy of real-time video surveillance data are major challenges in VCC. Therefore, this study reviews the importance of the security and privacy of real-time video data in VCC. First, we provide an overview of VANETs and their limitations. Second, we provide a state-of-the-art taxonomy for real-time video data in VCC. Then, the importance of real-time video surveillance data in both fifth-generation (5G) and sixth-generation (6G) networks is presented. Finally, the challenges and open issues of real-time video data in VCC are discussed.
3.
  • Koubarakis, M., et al. (authors)
  • From Copernicus Big Data to Extreme Earth Analytics
  • 2019
  • In: Advances in Database Technology - EDBT. - : OpenProceedings. - 9783893180813 ; pp. 690-693
  • Conference paper (peer-reviewed), abstract:
    • Copernicus is the European programme for monitoring the Earth. It consists of a set of systems that collect data from satellites and in-situ sensors, process these data, and provide users with reliable and up-to-date information on a range of environmental and security issues. The data and information processed and disseminated put Copernicus at the forefront of the big data paradigm, giving rise to all the relevant challenges, the so-called 5 Vs: volume, velocity, variety, veracity, and value. In this short paper, we discuss the challenges of extracting information and knowledge from huge archives of Copernicus data. We propose to achieve this with scale-out distributed deep learning techniques that run on very large clusters offering virtual machines and GPUs. We also discuss the challenges of achieving scalability in the management of the extreme volumes of information and knowledge extracted from Copernicus data. The envisioned scientific and technical work will be carried out in the context of the H2020 project ExtremeEarth, which starts in January 2019.
4.
  • Eassa, Fathy Elbouraey, et al. (authors)
  • ACC_TEST : Hybrid Testing Approach for OpenACC-Based Programs
  • 2020
  • In: IEEE Access. - : Institute of Electrical and Electronics Engineers (IEEE). - 2169-3536. ; 8, pp. 80358-80368
  • Journal article (peer-reviewed), abstract:
    • In recent years, OpenACC has been used in many supercomputers and has attracted many non-computer-science specialists to parallelizing their programs in different scientific fields, including weather forecasting and simulation. OpenACC is a high-level programming model that supports parallelism and is easy to learn and use: programs are parallelized by adding high-level directives without considering too many low-level details. Testing parallel programs is a difficult task, and it becomes even harder when programming models are involved, especially if they have been used incorrectly. In that case it is challenging to detect runtime errors and their causes, whether an error originates in the user source code or in the programming model directives. Even when these errors are detected and the source code is modified, we cannot guarantee that the errors have been corrected rather than merely hidden. Many tools and studies have investigated several programming models to identify and detect related errors. However, OpenACC has not been clearly targeted by any testing tool or previous study, even though OpenACC has many benefits and features that could lead to its increased use in building parallel systems with less effort. In this paper, we enhance ACC_TEST with the ability to test OpenACC-based programs and detect runtime errors by using hybrid testing techniques that improve coverage of the errors occurring in OpenACC programs while reducing overhead and testing time.
5.
  • Ismail, Mahmoud, et al. (authors)
  • Scaling HDFS to more than 1 million operations per second with HopsFS
  • 2017
  • In: Proceedings - 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGRID 2017. - : Institute of Electrical and Electronics Engineers Inc.. - 9781509066100 ; pp. 683-688
  • Conference paper (peer-reviewed), abstract:
    • HopsFS is an open-source, next-generation distribution of the Apache Hadoop Distributed File System (HDFS) that replaces the main scalability bottleneck in HDFS, the single-node in-memory metadata service, with a no-shared-state distributed system built on a NewSQL database. By removing the metadata bottleneck in Apache HDFS, HopsFS enables significantly larger cluster sizes, more than an order of magnitude higher throughput, and significantly lower client latencies for large clusters. In this paper, we detail the techniques and optimizations that enable HopsFS to surpass 1 million file system operations per second, at least 16 times higher throughput than HDFS. In particular, we discuss how we exploit recent high-performance features of NewSQL databases, such as application-defined partitioning, partition-pruned index scans, and distribution-aware transactions. Together with more traditional techniques, such as batching and write-ahead caches, we show how many incremental optimizations have enabled a revolution in distributed hierarchical file system performance.
6.
  • Kavassalis, P., et al. (authors)
  • What makes a Web site popular?
  • 2004
  • In: Communications of the ACM. - : Association for Computing Machinery (ACM). - 0001-0782 .- 1557-7317. ; 47:2, pp. 51-55
  • Journal article (peer-reviewed), abstract:
    • Several factors that affect the popularity of web sites are discussed. A computational model involving two superimposed interaction networks with random connections is developed, which links the sites as well as organizes social interactions among Internet users. It is suggested that an understanding of information flows and the surrounding connection networks is necessary to ensure a constant increase in user interest, loyalty, and market share. It is also recommended that e-marketers investigate and leverage the long-term ramifications of information network structures for predicting the behavior of Internet users toward their organizations' web sites.
7.
8.
  • Ormenisan, Alexandru-Adrian, et al. (authors)
  • Time travel and provenance for machine learning pipelines
  • 2020
  • In: OpML 2020 - 2020 USENIX Conference on Operational Machine Learning. - : USENIX Association.
  • Conference paper (peer-reviewed), abstract:
    • Machine learning pipelines have become the de facto paradigm for productionizing machine learning applications, as they clearly abstract the processing steps involved in transforming raw data into the engineered features that are then used to train models. In this paper, we use a bottom-up method for capturing provenance information about the processing steps and artifacts produced in ML pipelines. Our approach is based on replacing traditional intrusive hooks in application code (to capture ML pipeline events) with standardized change-data-capture support in the systems involved in ML pipelines: the distributed file system, the feature store, the resource manager, and the applications themselves. In particular, we leverage the data versioning and time-travel capabilities of our feature store to show how provenance can enable model reproducibility and debugging.
9.
  • Reale, R., et al. (authors)
  • DTL : Dynamic transport library for peer-to-peer applications
  • 2012
  • In: Distributed Computing and Networking. - Berlin, Heidelberg : Springer Berlin/Heidelberg. - 9783642259586 ; pp. 428-442
  • Conference paper (peer-reviewed), abstract:
    • This paper presents the design and implementation of the Dynamic Transport Library (DTL), a UDP-based reliable transport library initially designed for, but not limited to, peer-to-peer applications. DTL combines many features not simultaneously offered by any other transport library, including: i) a wide range of congestion control levels, from less-than-best-effort to high-priority; ii) prioritization of traffic relative to other, non-DTL traffic; iii) prioritization of traffic between DTL connections; iv) NAT-friendliness; v) portability; and vi) an application-level implementation. Moreover, DTL has a novel feature, namely the ability to change the level of aggressiveness of a given connection at run-time. All the features of DTL were validated in a controlled environment as well as on the PlanetLab testbed.
10.
  • Roverso, Roberto, et al. (authors)
  • Peer2View : A peer-to-peer HTTP-live streaming platform
  • 2012
  • In: 2012 IEEE 12th International Conference on Peer-to-Peer Computing, P2P 2012. - : IEEE. - 9781467328623 ; pp. 65-66
  • Conference paper (peer-reviewed), abstract:
    • Peer2View is a commercial peer-to-peer live video streaming (P2PLS) system. The novelty of Peer2View is threefold: i) it is the first P2PLS platform to support HTTP as the transport protocol for live content; ii) the system supports both single- and multi-bitrate streaming modes of operation; and iii) it makes use of application-layer dynamic congestion control to manage the priorities of transfers. Peer2View's goals are to achieve substantial savings toward the source of the stream while providing the same quality of user experience as a CDN.