SwePub
Search the SwePub database

Search: WFRF:(Arlos Patrik)

  • Result 1-10 of 40
1.
  • Ahmadi Mehri, Vida, et al. (author)
  • Automated Context-Aware Vulnerability Risk Management for Patch Prioritization
  • 2022
  • In: Electronics. - MDPI. - 2079-9292 ; 11:21
  • Journal article (peer-reviewed). Abstract:
    • The information-security landscape evolves continuously, with new vulnerabilities and sophisticated exploit tools discovered daily. Vulnerability risk management (VRM) is the most crucial cyber defense for eliminating attack surfaces in IT environments. VRM is a cyclical practice of identifying, classifying, evaluating, and remediating vulnerabilities. The evaluation stage of VRM is neither automated nor cost-effective, as it demands great manual administrative effort to prioritize patches. Therefore, there is an urgent need to improve the VRM procedure by automating the entire VRM cycle in the context of a given organization. The authors propose automated context-aware VRM (ACVRM) to address the above challenges. This study defines the criteria to consider in the evaluation stage of ACVRM to prioritize patching. Moreover, patch prioritization is customized to an organization's context by allowing the organization to select the vulnerability management mode and weight the selected criteria. Specifically, this study considers four vulnerability evaluation cases: (i) evaluation criteria are weighted homogeneously; (ii) attack complexity and availability are not considered important criteria; (iii) the security score is the only criterion considered; and (iv) criteria are weighted based on the organization's risk appetite. The result verifies the proposed solution's efficiency compared with the Rudder vulnerability management tool (CVE-plugin). While Rudder produces a ranking independent of the scenario, ACVRM can sort vulnerabilities according to the organization's criteria and context. Moreover, while Rudder randomly sorts vulnerabilities with the same patch score, ACVRM sorts them according to their age, giving a higher security score to older publicly known vulnerabilities. © 2022 by the authors. (A weighted-scoring sketch follows this entry.)
  •  
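The entry above describes ACVRM's patch prioritization only in outline. Purely as an illustration of the idea, and not the authors' implementation, the following Python sketch assumes a patch score computed as a weighted sum of organization-selected criteria (already normalized to [0, 1]) and breaks ties by age so that older publicly known vulnerabilities rank first, as the abstract describes. All names (Vulnerability, the criteria keys, the example weights) are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Vulnerability:
        cve_id: str
        published_year: int
        # e.g. {"security_score": 0.98, "attack_complexity": 0.3, "availability": 0.6}
        criteria: dict = field(default_factory=dict)

    def patch_score(vuln: Vulnerability, weights: dict) -> float:
        # Weighted sum of the organization's selected criteria.
        return sum(weights[name] * vuln.criteria.get(name, 0.0) for name in weights)

    def prioritize(vulns: list, weights: dict) -> list:
        # Highest patch score first; among equal scores, older vulnerabilities first.
        return sorted(vulns, key=lambda v: (-patch_score(v, weights), v.published_year))

    # Case (ii) from the abstract: attack complexity and availability carry no weight.
    weights_case_ii = {"security_score": 1.0, "attack_complexity": 0.0, "availability": 0.0}

For example, two vulnerabilities with the same weighted score but different publication years would be ordered with the older one first, mirroring the tie-breaking behaviour the abstract attributes to ACVRM.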
2.
  • Ahmadi Mehri, Vida, et al. (author)
  • Automated Patch Management : An Empirical Evaluation Study
  • 2023
  • In: Proceedings of the 2023 IEEE International Conference on Cyber Security and Resilience, CSR 2023. - IEEE. - 9798350311709 ; pp. 321-328
  • Conference paper (peer-reviewed). Abstract:
    • Vulnerability patch management is one of IT organizations' most complex issues due to the increasing number of publicly known vulnerabilities and explicit patch deadlines for compliance. Patch management requires human involvement in testing, deploying, and verifying the patch and its potential side effects. Hence, there is a need to automate the patch management procedure to meet patch deadlines with a limited number of available experts. This study proposed and implemented an automated patch management procedure to address the mentioned challenges. The method also includes logic to automatically handle errors that might occur in patch deployment and verification. Moreover, the authors added an automated review step before patch management to adjust the patch prioritization list if multiple cumulative patches or dependencies are detected. The result indicated that our method reduced the need for human intervention, increased the ratio of successfully patched vulnerabilities, and decreased the execution time of vulnerability risk management. (An illustrative patch-cycle sketch follows this entry.)
  •  
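The abstract above mentions an automated review step, deployment, verification, and error handling, but gives no interfaces. The following is a minimal sketch of such a loop under those assumptions; run_patch_cycle, deploy_patch, verify_patch, and merge_cumulative are placeholder names invented here, not functions from the study.

    def run_patch_cycle(prioritized, deploy_patch, verify_patch, merge_cumulative, max_retries=2):
        # Review step: collapse cumulative patches and reorder for dependencies before deployment.
        plan = merge_cumulative(prioritized)
        report = {"patched": [], "failed": []}
        for patch in plan:
            for _attempt in range(1 + max_retries):
                try:
                    deploy_patch(patch)
                    if verify_patch(patch):
                        report["patched"].append(patch)
                        break
                except RuntimeError:
                    continue  # automatic error handling: retry without human intervention
            else:
                report["failed"].append(patch)  # only these would be escalated to an expert
        return report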
3.
  • Ahmadi Mehri, Vida, et al. (author)
  • Normalization Framework for Vulnerability Risk Management in Cloud
  • 2021
  • In: Proceedings - 2021 International Conference on Future Internet of Things and Cloud, FiCloud 2021. - IEEE ; pp. 99-106
  • Conference paper (peer-reviewed). Abstract:
    • Vulnerability Risk Management (VRM) is a critical element in cloud security that directly impacts cloud providers' security assurance levels. Today, VRM is a challenging process because of the dramatic increase in known vulnerabilities (+26% in the last five years), and because it is increasingly dependent on the organization's context. Moreover, a vulnerability's severity score depends on the Vulnerability Database (VD) selected as a reference in VRM. All these factors introduce a new challenge for security specialists in evaluating and patching vulnerabilities. This study provides a framework to improve the classification and evaluation phases of vulnerability risk management when multiple vulnerability databases are used as a reference. Our solution normalizes the severity score of each vulnerability based on the selected security assurance level. The results of our study highlight the role of the vulnerability databases in patch prioritization, showing the advantage of using multiple VDs. (A severity-normalization sketch follows this entry.)
  •  
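The abstract states that severity scores are normalized per vulnerability database and per security assurance level, without giving the mapping. A minimal sketch of that idea, with hypothetical database scales and assurance-level weights (the paper's actual normalization may differ), could look like this:

    # Hypothetical maximum scale values per vulnerability database (VD).
    VD_SCALES = {"nvd": 10.0, "vendor_db": 100.0}

    # Hypothetical assurance-level factors; a higher level weighs severity more heavily.
    ASSURANCE_WEIGHT = {"basic": 0.8, "substantial": 1.0, "high": 1.2}

    def normalize(score: float, scale_max: float) -> float:
        # Map a database-specific severity score onto [0, 1].
        return min(max(score / scale_max, 0.0), 1.0)

    def normalized_severity(scores_by_vd: dict, assurance_level: str) -> float:
        # Average the normalized scores reported by the selected VDs, scaled by the assurance level.
        norm = [normalize(score, VD_SCALES[vd]) for vd, score in scores_by_vd.items()]
        return ASSURANCE_WEIGHT[assurance_level] * sum(norm) / len(norm)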
4.
  • Ahmadi Mehri, Vida, et al. (author)
  • Normalization of Severity Rating for Automated Context-aware Vulnerability Risk Management
  • 2020
  • In: Proceedings - 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion, ACSOS-C 2020. - Institute of Electrical and Electronics Engineers (IEEE). - 9781728184142 ; pp. 200-205
  • Conference paper (peer-reviewed). Abstract:
    • In the last three years, the unprecedented increase in discovered vulnerabilities ranked with critical and high severity has raised new challenges in Vulnerability Risk Management (VRM). Indeed, identifying, analyzing and remediating this high rate of vulnerabilities is labour-intensive, especially for enterprises dealing with complex computing infrastructures such as Infrastructure-as-a-Service providers. Hence there is a demand for new criteria to prioritize vulnerability remediation and for new automated/autonomic approaches to VRM. In this paper, we address the above challenge by proposing an Automated Context-aware Vulnerability Risk Management (AC-VRM) methodology that aims to reduce the labour-intensive tasks of security experts and to prioritize vulnerability remediation on the basis of the organization's context rather than risk severity only. The proposed solution considers multiple vulnerability databases to obtain broad coverage of known vulnerabilities and to determine the vulnerability rank. After describing the new VRM methodology, we focus on the problem of obtaining a single vulnerability score by normalization and fusion of the ranks obtained from multiple vulnerability databases. Our solution is a parametric normalization that accounts for the organization's needs/specifications. (A rank-fusion sketch follows this entry.)
  •  
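The abstract describes fusing per-database ranks into a single score via a parametric normalization, but the exact form is not given. As a stand-in, the sketch below uses a simple weighted mean of already-normalized ranks, with weights expressing the organization's trust in each database; the names and the weighting scheme are assumptions, not the paper's formula.

    def fuse_ranks(ranks_by_vd: dict, vd_weights: dict) -> float:
        # ranks_by_vd: per-database ranks normalized to [0, 1], where 1 is most critical.
        # vd_weights: organization-chosen weights per database (the "parameters").
        total = sum(vd_weights[vd] for vd in ranks_by_vd)
        return sum(vd_weights[vd] * rank for vd, rank in ranks_by_vd.items()) / total

    # Example: an organization that trusts NVD twice as much as a vendor feed.
    single_score = fuse_ranks({"nvd": 0.92, "vendor_db": 0.70}, {"nvd": 2.0, "vendor_db": 1.0})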
5.
  • Ahmadi Mehri, Vida (author)
  • Towards Automated Context-aware Vulnerability Risk Management
  • 2023
  • Doctoral thesis (other academic/artistic). Abstract:
    • The information security landscape continually evolves with an increasing number of publicly known vulnerabilities (e.g., 25064 new vulnerabilities in 2022). Vulnerabilities play a prominent role in all types of security-related attacks, including ransomware and data breaches. Vulnerability Risk Management (VRM) is an essential cyber defense mechanism to eliminate or reduce attack surfaces in information technology. VRM is a continuous procedure of identification, classification, evaluation, and remediation of vulnerabilities. The traditional VRM procedure is time-consuming, as classification, evaluation, and remediation require skills and knowledge of specific computer systems, software, networks, and security policies. Activities requiring human input slow down the VRM process, increasing the risk of exploiting a vulnerability. The thesis introduces the Automated Context-aware Vulnerability Risk Management (ACVRM) methodology to improve VRM procedures by automating the entire VRM cycle and reducing the procedure time and experts' intervention. ACVRM focuses on the challenging stages (i.e., classification, evaluation, and remediation) of VRM to support security experts in promptly prioritizing and patching vulnerabilities. The ACVRM concept is designed and implemented in a test environment as a proof of concept. The efficiency of patch prioritization by ACVRM was compared against a commercial vulnerability management tool (i.e., Rudder). ACVRM prioritized the vulnerabilities based on the patch score (i.e., the numeric representation of the vulnerability characteristics and the risk), historical data, and dependencies. The experiments indicate that ACVRM could rank the vulnerabilities in the organization's context by weighting the criteria used in patch score calculation. Automated patch deployment was implemented in three use cases to investigate the impact of learning from historical events and dependencies on the patch success rate and human intervention. Our findings show that ACVRM reduced the need for human actions, increased the ratio of successfully patched vulnerabilities, and decreased the cycle time of the VRM process.
  •  
6.
  • Arlos, Patrik, et al. (author)
  • A Distributed Passive Measurement Infrastructure
  • 2005
  • Conference paper (peer-reviewed). Abstract:
    • In this paper we describe a distributed passive measurement infrastructure. Its goals are to reduce the cost and configuration effort per measurement. The infrastructure is scalable with regard to link speeds and measurement locations. A prototype is currently deployed at our university and a demo is online at http://inga.its.bth.se/projects/dpmi. The infrastructure differentiates between measurements and the analysis of measurements; this way, the actual measurement equipment can focus on the practical issues of packet measurements. By using a modular approach, the infrastructure can handle many different capturing devices. The infrastructure can also deal with the security and privacy aspects that might arise during measurements. (A sketch of the capture/analysis separation follows this entry.)
  •  
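The key architectural point in the abstract is the separation between packet capture and analysis, with interchangeable capture devices. The sketch below only illustrates that separation; the class and function names are invented here and are not the DPMI API.

    from abc import ABC, abstractmethod
    from typing import Callable, Iterator, Tuple

    class CaptureDevice(ABC):
        # Modular capture back-end: each device type (DAG card, libpcap, ...) implements this.
        @abstractmethod
        def frames(self) -> Iterator[Tuple[float, bytes]]:
            """Yield (timestamp, raw_frame) pairs from the wire."""

    def analysis_consumer(device: CaptureDevice, handle_frame: Callable[[float, bytes], None]) -> None:
        # Analysis is decoupled from capture: it only consumes timestamped frames,
        # so the measurement equipment can focus on accurate packet capture.
        for timestamp, frame in device.frames():
            handle_frame(timestamp, frame)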
7.
  • Arlos, Patrik, et al. (author)
  • A Method to Estimate the Timestamp Accuracy of Measurement Hardware and Software Tools
  • 2007
  • Conference paper (peer-reviewed). Abstract:
    • Due to the complex diversity of contemporary Internet applications, computer network measurements have gained considerable interest during recent years. Since they supply network research, development and operations with data important for network traffic modelling, performance and trend analysis, etc., the quality of these measurements affects the results of these activities and thus the perception of the network and its services. One major source of error is the timestamp accuracy obtained from measurement hardware and software. Against this background, we present a method that can estimate the timestamp accuracy obtained from measurement hardware and software. The method is used to evaluate the timestamp accuracy of some commonly used measurement hardware and software. Results are presented for the Agilent J6800/J6830A measurement system, the Endace DAG 3.5E card, the Packet Capture Library (PCAP) either with PF_RING or Memory Mapping, and a RAW socket using either the kernel PDU timestamp (ioctl) or the CPU counter (TSC) to obtain timestamps. (An accuracy-estimation sketch follows this entry.)
  •  
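The abstract does not spell out the estimation method. One plausible reading, shown here purely as an assumption, is to send a packet train with a known, constant inter-packet gap through the device under test and compare the measured inter-arrival times against that gap; the spread of the deviations then indicates the achievable timestamp accuracy.

    import statistics

    def estimate_timestamp_accuracy(timestamps, known_gap):
        # timestamps: arrival times recorded by the device under test for a packet train
        # sent with a known, constant inter-packet gap (same unit, e.g. seconds).
        measured_gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        errors = [gap - known_gap for gap in measured_gaps]
        return {
            "mean_error": statistics.mean(errors),
            "stdev_error": statistics.stdev(errors),
            "max_abs_error": max(abs(e) for e in errors),
        }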
8.
  • Arlos, Patrik, et al. (author)
  • Accuracy Evaluation of Ping and J-OWAMP
  • 2006
  • Conference paper (peer-reviewed). Abstract:
    • Due to the complex diversity of contemporary Internet services, computer network measurements have gained considerable interest during recent years. Computer network measurements supply network research, development and operations with data important for network traffic modelling, performance and trend analysis, etc. Hence, the quality of these measurements affects the results of these activities and thus the perception of the network and its services. Active measurements are performed by injecting traffic into a network and observing the treatment that this traffic receives. Usually, active measurements are performed by writing special applications that act at the sender and receiver. These applications are usually executed as user processes. This causes some concern, since it can have serious implications for the obtained results, as they are affected by the scheduling mechanisms of the operating system. In this paper, we evaluate the accuracy of two active measurement tools, ping and J-OWAMP, by using high-accuracy passive measurements. Our results show that ping is quite accurate, to 0.1 ms for Linux and 1 ms for Windows XP, while J-OWAMP has a discrepancy of 25 ms plus serious time-synchronization problems.
  •  
9.
  • Arlos, Patrik (author)
  • Application Level Measurement
  • 2011
  • In: Network Performance Engineering. - Berlin / Heidelberg : Springer. - 9783642027413 ; pp. 14-36
  • Book chapter (other academic/artistic). Abstract:
    • In some cases, application-level measurements can be the only way for an application to get an understanding of the performance offered by the underlying network(s). It can also be that an application-level measurement is the only practical way to verify the availability of a particular service. Hence, as more and more applications perform measurements of various networks, be they fixed or mobile, it is crucial to understand the context in which application-level measurements operate, as well as their capabilities and limitations. To this end, in this paper we discuss some of the fundamentals of computer network performance measurements and, in particular, the key aspects to consider when using application-level measurements to estimate network performance properties.
  •  
10.
  • Arlos, Patrik, et al. (author)
  • Evaluation of Protocol Treatment in 3G Networks
  • 2011
  • Conference paper (peer-reviewed). Abstract:
    • In this work, we present a systematic study of how the traffic of different transport protocols (UDP, TCP and ICMP) is treated in three operational Swedish 3G networks. This is done by studying the impact that protocol and packet size have on the one-way delay (OWD) across the networks. We do this using a special method that allows us to calculate the exact OWD without having to face the usual clock-synchronization problems that are normally associated with OWD calculations. From our results we see that all three protocols are treated similarly by all three operators when we consider packet sizes that are smaller than 250 bytes and larger than 1100 bytes. We also show that larger packet sizes are given preferential treatment, with both a smaller median OWD and a smaller standard deviation. It is also clear that ICMP is given better performance than TCP and UDP. (An OWD-statistics sketch follows this entry.)
  •  
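The abstract reports median OWD and its standard deviation per packet size but does not detail the OWD method. Assuming sender- and receiver-side packets are timestamped against the same clock (which would avoid the synchronization problem the abstract mentions), the per-size statistics could be computed along these lines; the record format is hypothetical.

    import statistics
    from collections import defaultdict

    def owd_stats(records):
        # records: iterable of (packet_size, t_sent, t_received), with both timestamps
        # taken from the same clock so that their difference is an exact one-way delay.
        by_size = defaultdict(list)
        for size, t_sent, t_received in records:
            by_size[size].append(t_received - t_sent)
        # Median and standard deviation of OWD per packet size (needs >= 2 samples per size).
        return {size: (statistics.median(d), statistics.stdev(d)) for size, d in by_size.items()}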
Type of publication
conference paper (28)
journal article (6)
doctoral thesis (4)
reports (1)
book chapter (1)
Type of content
peer-reviewed (33)
other academic/artistic (7)
Author/Editor
Arlos, Patrik (37)
Fiedler, Markus (27)
Shaikh, Junaid (8)
Collange, Denis (6)
Casalicchio, Emilian ... (5)
Ahmadi Mehri, Vida (5)
Wac, Katarzyna (3)
Chu, Thi My Chinh (3)
Zepernick, Hans-Jürg ... (3)
Tutschku, Kurt (2)
Arlos, Patrik, Dr. (2)
Paladi, Nicolae (2)
Binzenhöfer, Andreas (2)
Graben, Björn auf de ... (2)
Phan, Hoc (2)
Bults, Richard (2)
Hlavacs, Helmut (2)
Hackbarth, Klaus (2)
Lundberg, Lars (1)
Abghari, Shahrooz (1)
Boeva, Veselka, Prof ... (1)
Grahn, Håkan (1)
Ickin, Selim (1)
Isaksson, Lennart (1)
Zepernick, Hans-Juer ... (1)
Tutschku, Kurt, 1966 ... (1)
Casalicchio, Emilian ... (1)
Axelsson, Stefan, Pr ... (1)
Pettersson, Mats (1)
Cheddad, Abbas (1)
Hu, Yan, 1985- (1)
Goswami, Prashant, 1 ... (1)
Lundberg, Lars, 1962 ... (1)
Nilsson, Arne A. (1)
Kommalapati, Ravicha ... (1)
Mendes, Emilia (1)
Chevul, Stefan (1)
Sundstedt, Veronica, ... (1)
Mahdi, Hajji (1)
Blouin, Francois (1)
Nottehed, Hans (1)
Gonsalves, Timothy A ... (1)
Bhardwaj, Anuraag (1)
Garro, Valeria, 1983 ... (1)
Temiz, Canberk (1)
Mkocha, Khadija (1)
Junaid, Junaid (1)
Fiedler, Markus, Pro ... (1)
Undheim, Astrid, Dr (1)
Klockar, Annika, 196 ... (1)
University
Blekinge Institute of Technology (39)
Lund University (2)
RISE (1)
Karlstad University (1)
Language
English (40)
Research subject (UKÄ/SCB)
Engineering and Technology (32)
Natural sciences (28)
Social Sciences (1)
