SwePub
Search the SwePub database

  Extended search

Boolean operators must be entered with CAPITAL LETTERS

Result list for search "hsv:(NATURAL SCIENCES) hsv:(Computer and Information Sciences) hsv:(Software Engineering) srt2:(2010-2014)"

Search: hsv:(NATURAL SCIENCES) hsv:(Computer and Information Sciences) hsv:(Software Engineering) > (2010-2014)

  • Result 1-25 of 951
2.
  • Lu, Zhihan, et al. (author)
  • Multimodal Hand and Foot Gesture Interaction for Handheld Devices
  • 2014
  • In: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP). - : Association for Computing Machinery (ACM). - 1551-6857 .- 1551-6865. ; 11:1
  • Journal article (peer-reviewed), abstract:
    • We present a hand-and-foot-based multimodal interaction approach for handheld devices. Our method combines input modalities (i.e., hand and foot) and provides a coordinated output to both modalities along with audio and video. The human foot gesture is detected and tracked using contour-based template detection (CTD) and the Tracking-Learning-Detection (TLD) algorithm. The 3D foot pose is estimated from the passive homography matrix of the camera. 3D stereoscopic rendering and vibrotactile feedback are used to enhance the immersive feeling. We developed a multimodal football game based on this approach as a proof of concept, and we confirm our system's user satisfaction through a user study.
  •  
3.
  • Paçacı, Görkem, et al. (author)
  • Towards a visual compositional relational programming methodology
  • 2012
  • In: Diagrams 2012. ; , s. 17-19
  • Conference paper (peer-reviewed), abstract:
    • We present a new visual programming method, based on Combilog, a compositional relational programming language. In this paper we focus on the compositional aspect of Combilog, the make operator, visually implementing it via a modification of Higraph diagrams, in an attempt to overcome the obscurity and complexity in the textual representation of this operator.
  •  
5.
  • Berntsson Svensson, Richard, et al. (author)
  • Prioritization of quality requirements : State of practice in eleven companies
  • 2011
  • In: 2011 IEEE 19th International Requirements Engineering Conference, RE 2011; Trento; 29 August 2011 through 2 September 2011. - Trento : IEEE. - 9781457709234 ; , s. 69-78
  • Conference paper (peer-reviewed), abstract:
    • Requirements prioritization is recognized as an important but challenging activity in software product development. For a product to be successful, it is crucial to find the right balance among competing quality requirements. Although the literature offers many methods for requirements prioritization, research on the prioritization of quality requirements is limited. This study identifies how quality requirements are prioritized in practice at 11 successful companies developing software-intensive systems. We found that ad hoc prioritization and priority grouping of requirements are the dominant methods for prioritizing quality requirements. The results also show that it is common to use customer input as a criterion for prioritization, but the absence of any criteria was also common. The results suggest that quality requirements by default have a lower priority than functional requirements, and that they only get attention in the prioritization process if decision-makers are dedicated to investing specific time and resources in quality requirements prioritization. The results of this study may help future research on quality requirements to focus investigations on industry-relevant issues.
  •  
6.
  • Biere, Armin, et al. (author)
  • SmacC: A Retargetable Symbolic Execution Engine
  • 2013
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. - 9783319024431 ; LNCS 8172, s. 482-486
  • Conference paper (peer-reviewed), abstract:
    • SmacC is a symbolic execution engine for C programs. It can be used for program verification, bounded model checking and generating SMT benchmarks. More recently we also successfully applied SmacC for high-level timing analysis of programs to infer exact loop bounds and safe over-approximations. SmacC uses the logic for bit-vectors with arrays to construct a bit-precise memory-model of a program for path-wise exploration.
  •  
7.
  • Biere, Armin, et al. (author)
  • The Auspicious Couple: Symbolic Execution and WCET Analysis
  • 2013
  • In: OpenAccess Series in Informatics. - 2190-6807. - 9783939897545 ; 30, s. 53-63
  • Conference paper (peer-reviewed), abstract:
    • We have recently shown that symbolic execution together with the implicit path enumeration technique can successfully be applied in the Worst-Case Execution Time (WCET) analysis of programs. Symbolic execution offers a precise framework for program analysis and tracks complex program properties by analyzing single program paths in isolation. This path-wise program exploration of symbolic execution is, however, computationally expensive, which often prevents full symbolic analysis of larger applications: the number of paths in a program increases exponentially with the number of conditionals, a situation denoted as the path explosion problem. Therefore, for applying symbolic execution in the timing analysis of programs, we propose to use WCET analysis as a guidance for symbolic execution in order to avoid full symbolic coverage of the program. By focusing only on paths or program fragments that are relevant for WCET analysis, we keep the computational costs of symbolic execution low. Our WCET analysis also profits from the precise results derived via symbolic execution. In this article we describe how use-cases of symbolic execution are materialized in the r-TuBound toolchain and present new applications of WCET-guided symbolic execution for WCET analysis. The new applications of selective symbolic execution are based on reducing the effort of symbolic analysis by focusing only on relevant program fragments. By using partial symbolic program coverage obtained by selective symbolic execution, we improve the WCET analysis and keep the effort for symbolic execution low.
  •  
8.
  • Blanc, Regis, et al. (author)
  • Tree Interpolation in Vampire
  • 2013
  • In: Proceedings of the 19th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR-19), December 14-19, 2013, Stellenbosch, South Africa. Kenneth L. McMillan and Aart Middeldorp and Andrei Voronkov (editors), Springer Lecture Notes in Computer Science. - Berlin, Heidelberg : Springer Berlin Heidelberg. ; LNCS 8312, s. 173-181
  • Conference paper (peer-reviewed), abstract:
    • We describe new extensions of the Vampire theorem prover for computing tree interpolants. These extensions generalize Craig interpolation in Vampire, and can also be used to derive sequence interpolants. We evaluated our implementation on a large number of examples over the theory of linear integer arithmetic and integer-indexed arrays, with and without quantifiers. When compared to other methods, our experiments show that some examples could only be solved by our implementation.
  •  
9.
  • Capilla, Rafael, 1964, et al. (author)
  • Systems and Software Variability Management: Concepts, Tools and Experiences
  • 2013
  • Book (other academic/artistic), abstract:
    • The success of product line engineering techniques in the last 15 years has popularized the use of software variability as a key modeling approach for describing the commonality and variability of systems at all stages of the software lifecycle. Software product lines enable a family of products to share a common core platform, while allowing for product specific functionality being built on top of the platform. Many companies have exploited the concept of software product lines to increase the resources that focus on highly differentiating functionality and thus improve their competitiveness with higher quality and reusable products and decreasing the time-to-market condition. Many books on product line engineering either introduce specific product line techniques or include brief summaries of industrial cases. From these sources, it is difficult to gain a comprehensive understanding of the various dimensions and aspects of software variability. Here the editors address this gap by providing a comprehensive reference on the notion of variability modeling in the context of software product line engineering, presenting an overview of the techniques proposed for variability modeling and giving a detailed perspective on software variability management. Their book is organized in four main parts, which guide the reader through the various aspects and dimensions of software variability. Part 1 which is mostly written by the editors themselves introduces the major topics related to software variability modeling, thus providing a multi-faceted view of both technological and management issues. Next, part 2 of the book comprises four separate chapters dedicated to research and commercial tools. Part 3 then continues with the most practical viewpoint of the book presenting three different industry cases on how variability is managed in real industry projects. 
Finally, part 4 concludes the book and encompasses six different chapters on emerging research topics in software variability, such as service-oriented or dynamic software product lines, and variability and aspect orientation. Each chapter briefly summarizes “What you will learn in this chapter”, so both expert and novice readers can easily locate the topics dealt with. Overall, the book captures the current state of the art and best practices, and indicates important open research challenges as well as possible pitfalls. Thus it serves as a reference for researchers and practitioners in software variability management, allowing them to develop the next set of solutions, techniques and methods in this complicated and yet fascinating field of software engineering.
  •  
10.
  • Caporuscio, Mauro, 1975-, et al. (author)
  • RESTful Service Architectures for Pervasive Networking Environments
  • 2011
  • In: REST. - New York, NY : Springer. - 9781441983022 ; , s. 401-422
  • Book chapter (peer-reviewed), abstract:
    • Computing facilities are an essential part of the fabric of our society, and an ever-increasing number of computing devices is deployed within the environment in which we live. The vision of pervasive computing is becoming real. To exploit the opportunities offered by pervasiveness, we need to revisit the classic software development methods to meet new requirements: (1) pervasive applications should be able to dynamically configure themselves, also benefiting from third-party functionalities discovered at run time and (2) pervasive applications should be aware of, and resilient to, environmental changes. In this chapter we focus on the software architecture, with the goal of facilitating both the development and the run-time adaptation of pervasive applications. More specifically we investigate the adoption of the REST architectural style to deal with pervasive environment issues. Indeed, we believe that, although REST has been introduced by observing and analyzing the structure of the Internet, its field of applicability is not restricted to it. The chapter also illustrates a proof-of-concept example, and then discusses the advantages of choosing REST over other styles in pervasive environments.
  •  
11.
  • Dragan, I., et al. (author)
  • Bound Propagation for Arithmetic Reasoning in Vampire
  • 2013
  • In: 2013 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing. - 9781479930357 ; 2013, s. 169-176
  • Conference paper (peer-reviewed), abstract:
    • This paper describes an implementation and experimental evaluation of a recently introduced bound propagation method for solving systems of linear inequalities over the reals and rationals. The implementation is part of the first-order theorem prover Vampire. The input problems are systems of linear inequalities over reals or rationals. Their satisfiability is checked by assigning values to the variables of the system and propagating the bounds on these variables. To make the method efficient, we use various strategies for representing numbers, selecting variable orderings, choosing variable values and propagating bounds. We evaluate our implementation on a large number of examples and compare it with state-of-the-art SMT solvers.
  •  
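The bound propagation method in record 11 can be illustrated with a toy interval-tightening loop over linear inequalities. This is a hypothetical sketch of the general technique, not Vampire's actual implementation; all names and the example system are invented:

```python
import math

def propagate_bounds(constraints, bounds, iters=20):
    """Tighten variable intervals for constraints sum(coeffs[v] * v) <= rhs
    by propagating bounds until a fixpoint. Toy sketch of bound propagation."""
    for _ in range(iters):
        changed = False
        for coeffs, rhs in constraints:
            for v, a in coeffs.items():
                # Smallest possible value of the other terms in the constraint.
                rest = sum(b * (bounds[u][0] if b > 0 else bounds[u][1])
                           for u, b in coeffs.items() if u != v)
                limit = (rhs - rest) / a
                lo, hi = bounds[v]
                if a > 0 and limit < hi:      # a*v <= rhs - rest  =>  v <= limit
                    bounds[v], changed = [lo, limit], True
                elif a < 0 and limit > lo:    # a < 0 flips the inequality: v >= limit
                    bounds[v], changed = [limit, hi], True
        if not changed:
            break
    return bounds

# Example: x + y <= 5, x >= 1, y >= 2 (the last two written as -x <= -1, -y <= -2).
system = [({"x": 1, "y": 1}, 5), ({"x": -1}, -1), ({"y": -1}, -2)]
box = {"x": [-math.inf, math.inf], "y": [-math.inf, math.inf]}
propagate_bounds(system, box)   # tightens to x in [1, 3], y in [2, 4]
```

Real implementations, as the abstract notes, additionally choose variable orderings and value assignments carefully; this sketch only shows the propagation step.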
12.
  • Gidenstam, Anders, et al. (author)
  • Cache-Aware Lock-Free Queues for Multiple Producers/Consumers and Weak Memory Consistency
  • 2010
  • In: Proceedings of the 14th International Conference on Principles of Distributed Systems (OPODIS) 2010. - Berlin, Heidelberg : Springer. - 9783642176524 - 3642176526 ; 6490, s. 302-317
  • Conference paper (peer-reviewed), abstract:
    • A lock-free FIFO queue data structure is presented in this paper. The algorithm supports multiple producers and multiple consumers and weak memory models. It has been designed to be cache-aware and work directly on weak memory models. It utilizes the cache behavior in concert with lazy updates of shared data, and a dynamic lock-free memory management scheme to decrease unnecessary synchronization and increase performance. Experiments on an 8-way multi-core platform show significantly better performance for the new algorithm compared to previous fast lock-free algorithms.
  •  
13.
  • Knoop, Jens, et al. (author)
  • WCET Squeezing: On-Demand Feasibility Refinement for Proven Precise WCET-Bounds
  • 2013
  • In: Proceedings of the 21st International Conference on Real-Time Networks and Systems (RTNS 2013), October 17-18, 2013, Sophia Antipolis, France. Michel Auguin and Robert de Simone and Robert Davis and Emmanuel Grolleau (editors), ACM. - New York, NY, USA : ACM. - 9781450320580 ; , s. 161-170
  • Conference paper (peer-reviewed), abstract:
    • The Worst-Case Execution Time (WCET) computed by a WCET analyzer is usually not tight, leaving a gap between the actual and the computed WCET of a program. In this article we present a novel on-demand WCET feasibility refinement technique, called WCET Squeezing, for minimizing this gap. WCET Squeezing provides conceptually new means for addressing the classical problem of WCET computation, by deriving a WCET bound that comes as close as possible to the actual one. WCET Squeezing is an anytime algorithm, that is, it can be stopped at any time without violating the soundness of its results. This anytime property allows WCET Squeezing to be applied not only for deriving precise WCET bounds but also for proving additional timing constraints over the program. Namely, WCET Squeezing can be used to guarantee that a program is fast enough by ensuring that its WCET is below some required limit. If the initially computed WCET of the program is above this limit, WCET Squeezing can be stopped as soon as the squeezed WCET of the program is below the limit (proving the program meets the required timing constraint), or if the squeezed WCET is tight but above the given limit (proving the program cannot meet the timing constraint). WCET Squeezing can also be used until a given time budget is exhausted to compute a tight(er) WCET bound for a program. These new applications of WCET Squeezing are out of the scope of traditional WCET analyzers. WCET Squeezing combines symbolic program execution with the Implicit Path Enumeration Technique (IPET) for computing a precise WCET bound. WCET Squeezing is applicable as a post-process to any WCET analyzer which encodes the IPET problem as an Integer Linear Program (ILP). We implemented our method in the r-TuBound toolchain and evaluated our implementation on a set of examples taken from the Mälardalen WCET benchmark suite. Our experiments demonstrate that WCET Squeezing can significantly tighten the WCET bounds of programs.
Moreover, the derived WCET bounds are proven to be precise at a moderate computational cost.
  •  
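As background to record 13: IPET encodes WCET computation as an integer linear program over basic-block execution counts. For a loop-free control-flow graph the bound reduces to a longest-path computation, which can be sketched in a few lines. A toy illustration with invented names, not the r-TuBound toolchain:

```python
from functools import lru_cache

def wcet_bound(cfg, cost, entry, exit_node):
    """Longest-path WCET bound on a loop-free CFG.
    cfg: node -> list of successor nodes; cost: node -> cycle count."""
    @lru_cache(maxsize=None)
    def longest(n):
        if n == exit_node:
            return cost[n]
        # Worst case over all outgoing branches.
        return cost[n] + max(longest(s) for s in cfg[n])
    return longest(entry)

# Diamond CFG: A branches to B or C, both rejoin at D.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
cost = {"A": 1, "B": 5, "C": 2, "D": 1}
print(wcet_bound(cfg, cost, "A", "D"))  # 7, via the path A-B-D
```

Loops and infeasible-path information are exactly what the full ILP encoding (and WCET Squeezing's refinement of it) handles beyond this sketch.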
14.
  • Kokkinakis, Dimitrios, 1965, et al. (author)
  • Query Logs as a Corpus.
  • 2013
  • In: Corpus Linguistics 2013 : abstract book. Lancaster: UCREL / edited by Andrew Hardie and Robbie Love.
  • Conference paper (other academic/artistic), abstract:
    • This paper provides a detailed description of a large Swedish health-related query log corpus and explores means to derive useful statistics, their distributions and analytics from its content across several dimensions. Information acquisition from query logs can be useful for several purposes and potential types of users, such as terminologists, infodemiologists / epidemiologists, medical data and web analysts, specialists in NLP technologies such as information retrieval and text mining but also public officials in health and safety organizations.
  •  
15.
  • Kovacs, Laura, 1980, et al. (author)
  • A Parametric Interpolation Framework for First-Order Theories
  • 2013
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. - 9783642451133 ; LNCS 8265:PART 1, s. 24-40
  • Conference paper (peer-reviewed), abstract:
    • Craig interpolation is successfully used in both hardware and software model checking. Generating good interpolants, and hence automatic understanding of the quality of interpolants, is however a very hard problem, requiring non-trivial reasoning in first-order theories. An important class of state-of-the-art interpolation algorithms is based on recursive procedures that generate interpolants from refutations of unsatisfiable conjunctions of formulas. We analyze this type of algorithm and develop a theoretical framework, called a parametric interpolation framework, for arbitrary first-order theories and inference systems. As interpolation-based verification approaches depend on the quality of interpolants, our method can be used to derive interpolants of different structure and strength, with or without quantifiers, from the same proof. We show that some well-known interpolation algorithms are instantiations of our framework.
  •  
17.
  • Kovacs, Laura, 1980, et al. (author)
  • The Inverse Method for Many-Valued Logics
  • 2013
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. - 9783642451133 ; LNCS 8265:PART 1, s. 12-23
  • Conference paper (peer-reviewed), abstract:
    • We define an automatic proof procedure for finitely many-valued logics given by truth tables. The proof procedure is based on the inverse method. To define this procedure, we introduce so-called introduction-based sequent calculi. By studying proof-theoretic properties of these calculi we derive efficient validity- and satisfiability-checking procedures based on the inverse method. We also show how to translate the validity problem for a formula to unsatisfiability checking of a set of propositional clauses.
  •  
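Record 17's procedure is proof-theoretic (the inverse method), but for finitely many-valued logics given by truth tables, validity can also be checked by brute-force enumeration of assignments, which makes a useful baseline. A toy sketch for Łukasiewicz three-valued logic; the choice of logic and all names are illustrative, not from the paper:

```python
from itertools import product

TRUTH_VALUES = (0, 0.5, 1)   # Lukasiewicz L3; 1 is the designated value

def neg(a):
    return 1 - a

def implies(a, b):
    return min(1, 1 - a + b)

def valid(formula, nvars):
    """A formula is valid iff it takes the designated value under every assignment."""
    return all(formula(*vals) == 1 for vals in product(TRUTH_VALUES, repeat=nvars))

print(valid(lambda a: implies(a, a), 1))    # True: a -> a holds in L3
print(valid(lambda a: max(a, neg(a)), 1))   # False: excluded middle fails at a = 0.5
```

Enumeration is exponential in the number of variables, which is precisely why proof-search procedures such as the inverse method are of interest.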
18.
  • Osman, Mohd Hafeez, et al. (author)
  • Condensing reverse engineered class diagrams through class name based abstraction
  • 2014
  • In: 2014 4th World Congress on Information and Communication Technologies, WICT 2014. - 9781479981151
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we report on a machine learning approach to condensing class diagrams. The goal of the algorithm is to learn to identify which classes are most relevant to include in the diagram, as opposed to full reverse engineering of all classes. This paper focuses on building a classifier that is based on the names of classes in addition to design metrics, and we compare to earlier work that is based on design metrics only. We assess our condensation method by comparing our condensed class diagrams to class diagrams that were made during the original forward design. Our results show that combining text metrics with design metrics leads to modest improvements over using design metrics only. On average, the improvement reaches 5.3%. Seven out of ten evaluated case studies show improvements ranging from 1% to 22%.
  •  
20.
  • Rana, Rakesh, et al. (author)
  • Selecting software reliability growth models and improving their predictive accuracy using historical projects data
  • 2014
  • In: Journal of Systems and Software. - : Elsevier BV. - 0164-1212 .- 1873-1228. ; 98, s. 59-78
  • Journal article (peer-reviewed), abstract:
    • During software development, two important decisions organizations have to make are how to allocate testing resources optimally and when the software is ready for release. SRGMs (software reliability growth models) provide an empirical basis for evaluating and predicting the reliability of software systems. When using SRGMs for the purpose of optimizing testing resource allocation, the model's ability to accurately predict the expected defect inflow profile is useful. For assessing release readiness, the asymptote accuracy is the most important attribute. Although more than a hundred models for software reliability have been proposed and evaluated over time, there exists no clear guide on which models should be used for a given software development process or for a given industrial domain. Using defect inflow profiles from large software projects from Ericsson, Volvo Car Corporation and Saab, we evaluate commonly used SRGMs for their ability to provide an empirical basis for making these decisions. We also demonstrate that using defect intensity growth rates from earlier projects increases the accuracy of the predictions. Our results show that the Logistic and Gompertz models are the most accurate; we further observe that classifying a given project based on its expected shape of defect inflow helps to select the most appropriate model.
  •  
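The Gompertz model that record 20 finds most accurate maps testing time to expected cumulative defects, and its asymptote parameter is the total-defect estimate relevant to release readiness. A minimal sketch of the model with a naive grid-search fit on synthetic data; the fitting strategy and all numbers are illustrative, not the paper's method or data:

```python
import math

def gompertz(t, a, b, c):
    """Expected cumulative defects at time t; `a` is the asymptote
    (the total number of defects the model predicts will ever be found)."""
    return a * math.exp(-b * math.exp(-c * t))

def fit_gompertz(ts, ys, a_grid, b_grid, c_grid):
    """Pick the grid point minimizing squared error; a crude stand-in
    for real nonlinear least-squares fitting."""
    best, best_err = None, math.inf
    for a in a_grid:
        for b in b_grid:
            for c in c_grid:
                err = sum((gompertz(t, a, b, c) - y) ** 2 for t, y in zip(ts, ys))
                if err < best_err:
                    best, best_err = (a, b, c), err
    return best

# Synthetic defect-inflow data generated from known parameters.
ts = list(range(20))
ys = [gompertz(t, 100, 4, 0.3) for t in ts]
print(fit_gompertz(ts, ys, [80, 100, 120], [2, 4, 6], [0.1, 0.3, 0.5]))  # (100, 4, 0.3)
```

The paper's point about using growth rates from earlier projects corresponds, in this toy setting, to narrowing the parameter grids before fitting.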
21.
  • Stefan, Deian, 1988, et al. (author)
  • Protecting Users by Confining JavaScript with COWL
  • 2014
  • In: Symposium on Operating Systems Design and Implementation (OSDI 2014).
  • Conference paper (peer-reviewed), abstract:
    • Modern web applications are conglomerations of JavaScript written by multiple authors: application developers routinely incorporate code from third-party libraries, and mashup applications synthesize data and code hosted at different sites. In current browsers, a web application’s developer and user must trust third-party code in libraries not to leak the user’s sensitive information from within applications. Even worse, in the status quo, the only way to implement some mashups is for the user to give her login credentials for one site to the operator of another site. Fundamentally, today’s browser security model trades privacy for flexibility because it lacks a sufficient mechanism for confining untrusted code. We present COWL, a robust JavaScript confinement system for modern web browsers. COWL introduces label-based mandatory access control to browsing contexts in a way that is fully backward compatible with legacy web content. We use a series of case-study applications to motivate COWL’s design and demonstrate how COWL allows both the inclusion of untrusted scripts in applications and the building of mashups that combine sensitive information from multiple mutually distrusting origins, all while protecting users’ privacy. Measurements of two COWL implementations, one in Firefox and one in Chromium, demonstrate a virtually imperceptible increase in page-load latency.
  •  
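The label-based mandatory access control of record 21 can be illustrated by modeling secrecy labels as sets of origins: data may flow only to a context labeled at least as restrictively, and combining data joins the labels. A toy model of the idea only, not COWL's actual browser API:

```python
def join(l1, l2):
    """Label of data derived from both inputs: the union of secrecy origins."""
    return l1 | l2

def can_flow(src, dst):
    """Flow is allowed only if the destination context carries (at least)
    every origin whose secrets the data may contain."""
    return src <= dst

a = {"https://a.example"}
b = {"https://b.example"}
mashup = join(a, b)            # data combined from both origins
print(can_flow(a, mashup))     # True: the more restrictive context may read it
print(can_flow(mashup, a))     # False: returning it to a alone would leak b's data
```

The subset check is the lattice ordering that makes confinement compositional: any sequence of joins can only make a label more restrictive.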
22.
  • Sundell, Håkan, 1968, et al. (author)
  • A Lock-Free Algorithm for Concurrent Bags
  • 2011
  • In: 23rd ACM Symposium on Parallelism in Algorithms and Architectures, SPAA'11. San Jose, 4-6 June 2011. - New York, NY, USA : ACM. - 9781450307437 ; , s. 335-344
  • Reports (other academic/artistic), abstract:
    • A lock-free bag data structure supporting unordered buffering is presented in this paper. The algorithm supports multiple producers and multiple consumers, as well as dynamic collection sizes. To handle concurrency efficiently, the algorithm was designed to thrive for disjoint-access-parallelism for the supported semantics. Therefore, the algorithm exploits a distributed design combined with novel techniques for handling concurrent modifications of linked lists using double marks, detection of total emptiness, and efficient memory management. Experiments on a 24-way multi-core platform show significantly better performance for the new algorithm compared to previous algorithms of relevance. Keywords: concurrent; data structure; non-blocking; shared memory
  •  
23.
  • Vogel, Bahtijar, 1980- (author)
  • An Open Architecture Approach for the Design and Development of Web and Mobile Software
  • 2014
  • Doctoral thesis (other academic/artistic), abstract:
    • The rapid evolution of web and mobile technologies as well as open standards are important ingredients for developing open software applications. HTML5, affordable electronics, and connectivity costs are some of the trends that drive the web towards an open platform and lead to an increased use of distributed applications. Proprietary software technologies have been extensively deployed throughout multiple platforms, including desktop, web, and mobile systems. Such systems are closed in many cases. Thus, it is rather difficult to expand existing features and create additional ones for them. Web and mobile software development is fragmented by the existence of multiple browsers and mobile operating systems that comply differently with web standards. The evolution of web and mobile technologies, coupled with the changes in the deployment environments in which they operate, has resulted in complex requirements that are challenging to satisfy. Additionally, the largest part of the development lifecycle is related to the need to constantly change or modify these software systems within a short time period. The fact that these systems evolve over time makes it difficult to meet the changing requirements. In this thesis, we offer a novel open architecture approach in the area of web and mobile software design and development when dealing with heterogeneous device environments, together with constantly evolving and dynamic requirements. This approach is grounded on our experiences gained during the last four years of project work regarding the development of a web and mobile software system to support mobile inquiry learning. This case served as a testbed for experimentation with heterogeneous device environments. After five development iterations, our software solution is considered robust, flexible, and expandable as a platform. Among other things, this was validated by testing with more than 500 users.
The open architecture approach is also grounded on a literature survey of state of the art projects and definitions related to this concept. The outcomes of this thesis show that an open architecture approach is characterized by flexibility, customizability, and extensibility, which are instantiated into a set of properties. The importance of stressing these three characteristics and their properties in the open architecture approach is based on the identified needs of using open source components, using open data standards, and reducing development time. The research efforts in this thesis resulted in a refined definition of an open architecture approach as well as the initial and refined models that are contextualized within the field of web and mobile software. For validation of the research, the Goal Question Metric (GQM) approach is adapted and extended with a layer of Tasks/Activities. The data is collected from the project work mentioned above and three follow-up cases. The results show that the benefits of an open architecture approach can be reflected in terms of: achievement of the software system’s long-term goals; reduced development time; and increased satisfaction of the users. These benefits refer to the possibility to easily adapt emerging technologies and address dynamic changes and requirements. The contributions of this thesis are threefold: (1) for researchers, our open architecture approach could be used to analyze a system from a top down perspective; (2) for developers, it could be used as an approach to identify and address the needs for building an open evolvable system from a bottom up perspective; (3) for domain experts in the technology enhanced learning field, it could be used as a sustainability approach through which to integrate new tools and address complex requirements when designing new educational activities.
  •  
24.
  • Winter, Jeff, et al. (author)
  • Identifying organizational barriers : a case study of usability work when developing software in the automation industry
  • 2014
  • In: JOURNAL OF SYSTEMS AND SOFTWARE. - : Elsevier. - 0164-1212 .- 1873-1228. ; 88, s. 54-73
  • Journal article (peer-reviewed), abstract:
    • This study investigates connections between usability efforts and organizational factors. This is an important field of research which so far appears to be insufficiently studied and discussed. It illustrates problems when working with software engineering tasks and usability requirements. It deals with a large company that manufactures industrial robots with an advanced user interface, which wanted to introduce usability KPIs to improve product quality. The situation in the company makes this difficult, due to a combination of organizational and behavioural factors that led to a "wicked problem" that caused conflicts, breakdowns and barriers. Addressing these problems requires a holistic view that places context in the foreground and technological solutions in the background. Developing the right product requires communication and collaboration between multiple stakeholders. The inclusion of end users, who fully understand their own work context, is vital. Achieving this is dependent on organizational change and management commitment. One step towards beginning this change process may be through studying ways to introduce user-centred design processes.
  •  
25.
  • Bello, Luciano, 1981, et al. (author)
  • Towards a Taint Mode for Cloud Computing Web Application
  • 2012
  • In: 7th Workshop on Programming Languages and Analysis for Security. - New York, NY, USA : ACM. - 9781450314411 ; , s. 7:1-7:12
  • Conference paper (peer-reviewed), abstract:
    • Cloud computing is generally understood as the distribution of data and computations over the Internet. Over the past years, there has been a steep increase in web sites using this technology. Unfortunately, those web sites are not exempt from injection flaws and cross-site scripting, two of the most common security risks in web applications. Taint analysis is an automatic approach to detecting vulnerabilities. Cloud computing platforms possess several features that, while facilitating the development of web applications, make it difficult to apply off-the-shelf taint analysis techniques. More specifically, several of the existing taint analysis techniques do not deal with persistent storage (e.g. object datastores), opaque objects (objects whose implementation cannot be accessed and thus tracking tainted data becomes a challenge), or a rich set of security policies (e.g. forcing a specific order of sanitizers to be applied). We propose a taint analysis for cloud computing web applications that considers these aspects. Rather than modifying interpreters or compilers, we provide taint analysis via a Python library for the cloud computing platform Google App Engine (GAE). To evaluate the use of our library, we harden an existing GAE web application against cross-site scripting attacks.
  •  
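The library-based taint mode of record 25 can be caricatured in a few lines: taint is a dynamic mark that survives intercepted string operations and is cleared only by a sanitizer, while sinks reject tainted values. This is a toy sketch only; real taint libraries intercept far more operations and support richer policies, and none of these names come from the paper's library:

```python
class Tainted(str):
    """String carrying a taint mark; concatenation preserves the mark.
    (Other operations, e.g. slicing, would also need interception in a real tool.)"""
    def __add__(self, other):
        return Tainted(str.__add__(self, other))
    def __radd__(self, other):
        # Calling str.__add__ directly avoids recursing back into this class.
        return Tainted(str.__add__(other, self))

def sanitize(s):
    """Escape HTML metacharacters; the plain-str result is untainted."""
    return str(s).replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

def render(s):
    """Sink: refuse tainted input, e.g. to block cross-site scripting."""
    if isinstance(s, Tainted):
        raise ValueError("tainted value reached a sink; sanitize first")
    return s

page = "Hello " + Tainted("<script>")   # user-controlled input stays tainted
print(render(sanitize(page)))           # Hello &lt;script&gt;
```

Because `Tainted` subclasses `str` and overrides `__radd__`, the reflected method takes precedence even when the tainted value is the right operand, so `"Hello " + Tainted(...)` also yields a tainted result.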
Type of publication
conference paper (586)
journal article (209)
licentiate thesis (37)
doctoral thesis (34)
book chapter (28)
reports (20)
editorial collection (11)
editorial proceedings (11)
book (7)
research review (6)
other publication (1)
patent (1)
Type of content
peer-reviewed (771)
other academic/artistic (176)
pop. science, debate, etc. (4)
Author/Editor
Weyns, Danny (41)
Petersen, Kai (41)
Wohlin, Claes (40)
Bosch, Jan, 1967 (35)
Šmite, Darja (34)
Gorschek, Tony (34)
Feldt, Robert (27)
Berger, Christian, 1 ... (26)
Fricker, Samuel (26)
Torkar, Richard (23)
Tichy, Matthias, 197 ... (20)
Feldt, Robert, 1972 (19)
Wnuk, Krzysztof (18)
Regnell, Björn (17)
Hansson, Jörgen, 197 ... (16)
Perez-Palacin, Diego (16)
Mendes, Emilia (16)
Torkar, Richard, 197 ... (15)
Herold, Sebastian (15)
Andersson, Jesper (14)
Mirandola, Raffaela (14)
Wingkvist, Anna (14)
Chaudron, Michel, 19 ... (13)
Ericsson, Morgan (13)
Lindström Claessen, ... (13)
Lenhard, Jörg (13)
Staron, Miroslaw, 19 ... (12)
Löwe, Welf (12)
Hähnle, Reiner, 1962 (12)
Heldal, Rogardt, 196 ... (12)
Angelis, Lefteris (12)
Palma, Francis (12)
Grahn, Håkan (11)
Afzal, Wasif (11)
Runeson, Per (11)
Eklund, Ulrik, 1967 (11)
Unterkalmsteiner, Mi ... (10)
Kovacs, Laura, 1980 (10)
Russo, Alejandro, 19 ... (10)
Jansson, Patrik, 197 ... (10)
Gencel, Cigdem (10)
Khurum, Mahvish (10)
Merseguer, Jose (10)
Wirtz, Guido (10)
Börstler, Jürgen (9)
Gorschek, Tony, 1973 (9)
Rausch, Andreas (9)
Barney, Sebastian (9)
Pareto, Lars, 1966 (9)
Svensson, Joel Bo, 1 ... (9)
University
Chalmers University of Technology (310)
Blekinge Institute of Technology (308)
University of Gothenburg (126)
Linnaeus University (108)
Royal Institute of Technology (56)
Uppsala University (55)
Mälardalen University (39)
Lund University (34)
Karlstad University (34)
Umeå University (17)
Linköping University (17)
University of Skövde (15)
Malmö University (11)
Kristianstad University College (10)
Örebro University (8)
University of Borås (6)
Halmstad University (5)
RISE (5)
University West (4)
Jönköping University (4)
Stockholm University (3)
Mid Sweden University (2)
Swedish University of Agricultural Sciences (2)
Luleå University of Technology (1)
Stockholm School of Economics (1)
Karolinska Institutet (1)
Language
English (949)
Swedish (2)
Research subject (UKÄ/SCB)
Natural sciences (951)
Engineering and Technology (73)
Social Sciences (50)
Medical and Health Sciences (6)
Humanities (5)
Agricultural Sciences (2)
