SwePub
Search the SwePub database

  Advanced search

Result list for the search "L4X0:0280 7971 ;pers:(Shahmehri Nahid)"

Search: L4X0:0280 7971 > Shahmehri Nahid

  • Results 1-10 of 11
1.
  • Ardi, Shanai, 1977- (author)
  • A Model and Implementation of a Security plug-in for the Software Life Cycle
  • 2008
  • Licentiate thesis (other academic/artistic), abstract
    • Currently, security is frequently considered late in the software life cycle. It is often bolted on late in development, or even during deployment or maintenance, through activities such as add-on security software and penetration-and-patch maintenance. Even if software developers aim to incorporate security into their products from the beginning of the software life cycle, they face an exhaustive amount of ad hoc, unstructured information without any practical guidance on how and why this information should be used and what the costs and benefits of using it are. This is due to a lack of structured methods. In this thesis we present a model for secure software development and an implementation of a security plug-in that deploys this model in the software life cycle. The model is a structured unified process, named S3P (Sustainable Software Security Process), and is designed to be easily adaptable to any software development process. S3P provides the formalism required to identify the causes of vulnerabilities and the mitigation techniques that address these causes to prevent vulnerabilities. We present a prototype of the security plug-in implemented for the OpenUP/Basic development process in the Eclipse Process Framework. We also present the results of the evaluation of this plug-in. The work in this thesis is a first step towards a general framework for introducing security into the software life cycle and supporting software process improvements to prevent the recurrence of software vulnerabilities.
2.
  • Duma, Claudiu (author)
  • Security and Efficiency Tradeoffs in Multicast Group Key Management
  • 2003
  • Licentiate thesis (other academic/artistic), abstract
    • An ever-increasing number of Internet applications, such as content and software distribution, distance learning, multimedia streaming, teleconferencing, and collaborative workspaces, need efficient and secure multicast communication. However, efficiency and security are competing requirements, and balancing them to meet application needs is still an open issue. In this thesis we study the tradeoffs between efficiency and security requirements in group key management for multicast communication. Efficiency is measured in terms of minimizing the group rekeying cost and the key storage cost, while security is measured in terms of achieving backward secrecy, forward secrecy, and resistance to collusion. We propose two new group key management schemes that balance efficiency against resistance to collusion. The first scheme is a flexible category-based scheme and addresses applications where users can be categorized based on their accessibility to the multicast channel. As shown by the evaluation, this scheme has a low rekeying cost and a low key storage cost for the controller but, in certain cases, requires a high key storage cost for the users. In an extension to the basic scheme we alleviate this latter problem. For applications where user categorization is not feasible, we devise a cluster-based group key management scheme. In this scheme the resistance to collusion is measured by an integer parameter. The communication and storage requirements for the controller depend on this parameter as well, and they decrease as the resistance to collusion is relaxed. The results of the analytical evaluation show that our scheme allows fine-tuning of security versus efficiency requirements at runtime, which is not possible with previous group key management schemes.
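The rekeying-cost tradeoff this abstract refers to can be illustrated with the classic logical key hierarchy (LKH) baseline. This is a generic sketch for intuition only, not the category- or cluster-based schemes the thesis actually proposes:

```python
import math

def rekey_key_count(group_size: int) -> int:
    """Number of keys that must be replaced when one member leaves a
    balanced binary key tree: every key on the departing member's
    leaf-to-root path is compromised and has to change."""
    if group_size <= 1:
        return 0
    return math.ceil(math.log2(group_size))

# The controller's rekeying cost thus grows logarithmically in the
# group size; a flat scheme with a single shared group key would have
# to resend the new key to all remaining members individually.
```

Schemes such as those in the thesis then trade parts of this O(log n) cost against key storage and resistance to collusion.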
3.
  • Ellqvist, Tommy, 1980- (author)
  • Supporting Scientific Collaboration through Workflows and Provenance
  • 2010
  • Licentiate thesis (other academic/artistic), abstract
    • Science is changing. Computers, fast communication, and new technologies have created new ways of conducting research. For instance, researchers from different disciplines are processing and analyzing scientific data that is increasing at an exponential rate. This kind of research requires that scientists have access to tools that can handle huge amounts of data, enable access to vast computational resources, and support the collaboration of large teams of scientists. This thesis focuses on tools that help support scientific collaboration. Workflows and provenance are two concepts that have proven useful in supporting scientific collaboration. Workflows provide a formal specification of scientific experiments, and provenance offers a model for documenting data and process dependencies. Together, they enable the creation of tools that can support collaboration through the whole scientific life-cycle, from specification of experiments to validation of results. However, existing models for workflows and provenance are often specific to particular tasks and tools. This makes it hard to analyze the history of data that has been generated over several application areas by different tools. Moreover, workflow design is a time-consuming process and often requires extensive knowledge of the tools involved and collaboration with researchers with different expertise. This thesis addresses these problems. Our first contribution is a study of the differences between two approaches to interoperability between provenance models: direct data conversion and mediation. We perform a case study where we integrate three different provenance models using the mediation approach and show its advantages compared to data conversion. Our second contribution serves to support workflow design by allowing multiple users to design workflows concurrently. Current workflow tools lack the ability for users to work simultaneously on the same workflow. We propose a method that uses the provenance of workflow evolution to enable real-time collaborative design of workflows. Our third contribution considers supporting workflow design by reusing existing workflows. Workflow collections for reuse are available, but more efficient methods for generating summaries of search results are still needed. We explore new summarization strategies that consider the workflow structure.
4.
  • Jakoniené, Vaida, 1974- (author)
  • A study in integrating multiple biological data sources
  • 2005
  • Licentiate thesis (other academic/artistic), abstract
    • Life scientists often have to retrieve data from multiple biological data sources to solve their research problems. Although many data sources are available, they vary in content, data format, and access methods, which often vastly complicates the data retrieval process. The user must decide which data sources to access and in which order, how to retrieve the data, and how to combine the results - in short, the task of retrieving data requires a great deal of effort and expertise on the part of the user. Information integration systems aim to alleviate these problems by providing a uniform (or even integrated) interface to biological data sources. The information integration systems currently available for biological data sources use traditional integration approaches. However, biological data and data sources have unique properties which introduce new challenges, requiring the development of new solutions and approaches. This thesis is part of the BioTrifu project, which explores approaches to integrating multiple biological data sources. First, the thesis describes properties of biological data sources and existing systems that enable integrated access to them. Based on this study, requirements for systems integrating biological data sources are formulated, and the challenges involved in developing such systems are discussed. Then, the thesis presents a query language and a high-level architecture for the BioTrifu system that meet these requirements. An approach to generating a query plan in the presence of alternative data sources and ways to integrate the data is then developed. Finally, the design and implementation of a prototype for the BioTrifu system are presented.
5.
  • Kamkar, Mariam, 1952-, et al. (author)
  • Affect-chaining in program flow analysis applied to queries of programs
  • 1987
  • Licentiate thesis (other academic/artistic), abstract
    • This thesis presents how program flow analysis methods can be used to help the programmer understand data flow and data dependencies in programs. The design and implementation of an interactive query tool based on static analysis methods is presented. These methods include basic analysis and cross-reference analysis, intraprocedural data flow analysis, interprocedural data flow analysis, and affect-chaining analysis. The novel concept of affect-chaining is introduced, which is the process of analysing the flow of data between variables in a program. We present forward and backward affect-chaining, as well as algorithms to compute these quantities. A theorem about affect-chaining is also proved. We have found that data flow problems appropriate for query applications often need to keep track of the paths associated with data flows, in contrast to flow analysis in conventional compiler optimization.
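For intuition, forward affect-chaining over straight-line code can be pictured as a transitive reachability computation over assignments. This is an illustrative toy with an assumed (target, source_vars) representation; the thesis develops the full intra- and interprocedural algorithms:

```python
def forward_affect(assignments, var):
    """Compute which variables the value of `var` can reach, given an
    ordered list of straight-line assignments (target, source_vars)."""
    affected = {var}
    for target, sources in assignments:
        # If any already-affected variable flows into `target`,
        # `target` becomes affected too.
        if affected & sources:
            affected.add(target)
    return affected - {var}

# For the sequence  b = a; c = b; d = e  the chain starting at 'a'
# reaches b and c, but not d:
# forward_affect([("b", {"a"}), ("c", {"b"}), ("d", {"e"})], "a")
# -> {"b", "c"}
```

Backward affect-chaining would ask the converse question: which variables can affect a given variable.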
6.
  • Karresand, Martin, 1970- (author)
  • Completing the Picture : Fragments and Back Again
  • 2008
  • Licentiate thesis (other academic/artistic), abstract
    • Better methods and tools are needed in the fight against child pornography. This thesis presents a method for file type categorisation of unknown data fragments, a method for reassembly of JPEG fragments, and the requirements put on an artificial JPEG header for viewing reassembled images. To enable empirical evaluation of the methods, a number of tools based on them have been implemented. The file type categorisation method identifies JPEG fragments with a detection rate of 100% and a false positive rate of 0.1%. The method uses three algorithms: Byte Frequency Distribution (BFD), Rate of Change (RoC), and 2-grams. The algorithms are designed for different situations, depending on the requirements at hand. The reconnection method correctly reconnects 97% of a Restart (RST) marker enabled JPEG image fragmented into 4 KiB pieces. When dealing with fragments from several images at once, the method is able to correctly connect 70% of the fragments in the first iteration. Two parameters in a JPEG header are crucial to the quality of the image: the size of the image and the sampling factor (actually factors) of the image. The size can be found using brute force, and the sampling factors only take on three different values. Hence it is possible to use an artificial JPEG header to view all or parts of an image. The only requirement is that the fragments contain RST markers. The results of the evaluations show that it is possible to find, reassemble, and view JPEG image fragments with high certainty.
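The first two algorithm names hint at how content-based categorisation works; the sketch below shows the BFD and RoC statistics themselves, while the per-file-type models that fragments are compared against are omitted:

```python
def byte_frequency_distribution(fragment: bytes):
    """Normalised frequency of each of the 256 possible byte values."""
    counts = [0] * 256
    for b in fragment:
        counts[b] += 1
    n = len(fragment) or 1  # avoid division by zero on empty input
    return [c / n for c in counts]

def mean_rate_of_change(fragment: bytes) -> float:
    """Average absolute difference between consecutive bytes; tends to
    be high for compressed data such as JPEG and low for plain text."""
    if len(fragment) < 2:
        return 0.0
    diffs = [abs(fragment[i + 1] - fragment[i])
             for i in range(len(fragment) - 1)]
    return sum(diffs) / len(diffs)
```

In a categoriser, vectors like these would be computed per fragment and matched against centroids built from known samples of each file type.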
7.
  • Lin, Han-Hsuan, 1980- (author)
  • Secure and scalable E-service software delivery
  • 2001
  • Licentiate thesis (other academic/artistic), abstract
    • Due to the complexity of software and end-user operating environments, software management in general is not an easy task for end-users. In the context of e-services, what end-users buy is the service package. Generally speaking, they should not have to be concerned with how to get the required software and how to make it work properly on their own sites. On the other hand, service providers would not like to have their service-related software managed in a non-professional way, which might cause problems when providing services. E-service software delivery is the starting point in e-service software management. It is the functional foundation for performing further software management tasks, e.g., installation, configuration, and activation. This thesis concentrates on how to deliver e-service software to a large number of geographically distributed end-users. Special emphasis is placed on the issues of efficiency (in terms of total transmission time and consumed resources), scalability (in terms of the number of end-users), and security (in terms of confidentiality and integrity). In the thesis, we propose an agent-based architectural model for e-service software delivery, aiming at automating the tasks involved, such as registration, key management, and recipient status report collection. Based on the model, we develop a multicast software delivery system, which provides a secure and scalable solution to distributing software over publicly accessible networks. By supplying end-users with site information examination, the system builds a bridge towards further software management tasks. We also present a novel strategy for scalable multicast session key management in the context of software delivery, which can efficiently handle a dynamic reduction in group membership of up to 50% of the total. An evaluation is provided from the perspective of resource consumption due to security management activities.
8.
  • Liu, Qiang (author)
  • Dealing with Missing Mappings and Structure in a Network of Ontologies
  • 2011
  • Licentiate thesis (other academic/artistic), abstract
    • With the popularity of the World Wide Web, a large amount of data is generated and made available through the Internet every day. To integrate and query this huge amount of heterogeneous data, the vision of the Semantic Web has been recognized as a possible solution. One key technology for the Semantic Web is ontologies. Many ontologies have been developed in recent years. Meanwhile, due to the demand from applications using multiple ontologies, mappings between entities of these ontologies are generated as well, which leads to ontology networks consisting of ontologies and the mappings between them. However, neither developing ontologies nor finding mappings between ontologies is an easy task. It may happen that the ontologies are not consistent or complete, that the mappings between them are not correct or complete, or that the resulting ontology network is not consistent. This may lead to problems when they are used in semantically-enabled applications. In this thesis, we address two issues relevant to the quality of the mappings and the structure in the ontology network. The first issue deals with missing mappings between networked ontologies. Assuming the existing mappings between ontologies are correct, we investigate whether and how to use them to find more mappings between ontologies. We propose and test several strategies for using the given correct mappings to align ontologies. The second issue deals with missing structure, in particular missing is-a relations, in networked ontologies. Based on the assumption that missing is-a relations are a kind of modeling defect, we propose an ontology debugging approach to tackle this issue. We develop an algorithm for detecting missing is-a relations in ontologies, as well as algorithms that assist the user in repairing by generating and recommending possible repairs and executing them. Based on this approach, we develop a system and test its use and performance.
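The detection idea can be sketched in miniature: if two terms of one ontology map to terms that stand in an is-a relation in another ontology, but the first ontology lacks the corresponding relation, that pair is a candidate missing is-a. This toy, with assumed set/dict encodings of ontologies and mappings, only illustrates the intuition; the thesis's detection and repair algorithms are considerably more involved:

```python
def candidate_missing_isa(isa_a, isa_b, mappings):
    """isa_a, isa_b: sets of (sub, super) pairs in ontologies A and B.
    mappings: dict from A-terms to their mapped B-terms.
    Returns pairs of A-terms that look like missing is-a relations."""
    candidates = set()
    for x, bx in mappings.items():
        for y, by in mappings.items():
            # B says bx is-a by; if A lacks x is-a y, flag it.
            if x != y and (bx, by) in isa_b and (x, y) not in isa_a:
                candidates.add((x, y))
    return candidates
```

A repair step would then present such candidates to the user, who decides whether to add the relation or reject it.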
9.
  • Tan, He, 1977- (author)
  • Aligning and Merging Biomedical Ontologies
  • 2006
  • Licentiate thesis (other academic/artistic), abstract
    • Due to the explosion of the amount of biomedical data, knowledge and tools that are often publicly available over the Web, a number of difficulties are experienced by biomedical researchers. For instance, it is difficult to find, retrieve and integrate information that is relevant to their research tasks. Ontologies and the vision of a Semantic Web for life sciences alleviate these difficulties. In recent years many biomedical ontologies have been developed and many of these ontologies contain overlapping information. To be able to use multiple ontologies they have to be aligned or merged. A number of systems have been developed for aligning and merging ontologies and various alignment strategies are used in these systems. However, there are no general methods to support building such tools, and there exist very few evaluations of these strategies. In this thesis we give an overview of the existing systems. We propose a general framework for aligning and merging ontologies. Most existing systems can be seen as instantiations of this framework. Further, we develop SAMBO (System for Aligning and Merging Biomedical Ontologies) according to this framework. We implement different alignment strategies and their combinations, and evaluate them in terms of quality and processing time within SAMBO. We also compare SAMBO with two other systems. The work in this thesis is a first step towards a general framework that can be used for comparative evaluations of alignment strategies and their combinations.
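As an illustration of what a single string-based alignment strategy of the kind evaluated in such frameworks might look like, here is a hedged sketch using Python's standard difflib, not SAMBO's actual matchers:

```python
import difflib

def string_align(terms_a, terms_b, threshold=0.8):
    """Suggest term mappings whose normalised edit similarity (via
    difflib's SequenceMatcher) reaches the given threshold."""
    mappings = []
    for a in terms_a:
        for b in terms_b:
            sim = difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if sim >= threshold:
                mappings.append((a, b, round(sim, 2)))
    return mappings

# string_align(["Heart", "Aorta"], ["heart", "lung"])
# -> [("Heart", "heart", 1.0)]
```

Real alignment systems combine several such matchers (string-based, structure-based, background-knowledge-based) and weight their results, which is precisely the kind of combination the thesis evaluates.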
10.
  • Vapen, Anna, 1983- (author)
  • Contributions to Web Authentication for Untrusted Computers
  • 2011
  • Licentiate thesis (other academic/artistic), abstract
    • Authentication methods offer varying levels of security. Methods with one-time credentials generated by dedicated hardware tokens can reach a high level of security, whereas password-based authentication methods have a low level of security, since passwords can be eavesdropped and stolen by an attacker. Password-based methods are dominant in web authentication since they are both easy to implement and easy to use. Dedicated hardware, on the other hand, is not always available to the user, usually requires additional equipment, and may be more complex to use than password-based authentication. Different services and applications on the web have different requirements for the security of authentication. Therefore, it is necessary for designers of authentication solutions to address this need for a range of security levels. Another concern is mobile users authenticating from unknown, and therefore untrusted, computers. This in turn raises issues of availability, since users need secure authentication to be available regardless of where they authenticate or which computer they use. We propose a method for the evaluation and design of web authentication solutions that takes into account a number of often overlooked design factors, i.e. availability, usability, and economic aspects. Our proposed method uses the concept of security levels from the Electronic Authentication Guideline provided by NIST. We focus on the use of handheld devices, especially mobile phones, as flexible, multi-purpose (i.e. non-dedicated) hardware devices for web authentication. Mobile phones offer unique advantages for secure authentication, as they are small, flexible, and portable, and provide multiple data transfer channels. Phone designs, however, vary, and the choice of channels and authentication methods will influence the security level of authentication. It is not trivial to maintain a consistent overview of the strengths and weaknesses of the available alternatives. Our evaluation and design method provides this overview and can help developers and users compare and choose authentication solutions.


 