SwePub

Search results for the query "WFRF:(Angelis Lefteris)"


  • Results 1-28 of 28
1.
  • Angelis, Lefteris, et al. (authors)
  • A Framework of Statistical and Visualization Techniques for Missing Data Analysis in Software Cost Estimation
  • 2018
  • In: Computer Systems and Software Engineering. IGI Global. ISBN 9781522539230, 9781522539247; pp. 433-460
  • Book chapter (other academic/artistic). Abstract:
    • Software Cost Estimation (SCE) is a critical phase in software development projects. However, due to the growing complexity of the software itself, a common problem in building software cost models is that the available datasets contain lots of missing categorical data. The purpose of this chapter is to show how a framework of statistical, computational, and visualization techniques can be used to evaluate and compare the effect of missing data techniques on the accuracy of cost estimation models. Hence, the authors use five missing data techniques: Multinomial Logistic Regression, Listwise Deletion, Mean Imputation, Expectation Maximization, and Regression Imputation. The evaluation and the comparisons are conducted using Regression Error Characteristic curves, which provide visual comparison of different prediction models, and Regression Error Operating Curves, which examine predictive power of models with respect to under- or over-estimation.
  •  
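
To make the REC-curve comparison described in entry 1 concrete, here is a minimal illustrative Python sketch (not the chapter's code; the effort values and technique names are invented). An REC curve plots an error tolerance against the fraction of projects predicted within that tolerance, so the curve closest to the top-left corner indicates the better-performing imputation strategy.

```python
# Illustrative sketch: Regression Error Characteristic (REC) curves for two
# hypothetical imputation strategies. Not the chapter's implementation.
import numpy as np
import matplotlib.pyplot as plt

def rec_curve(actual, predicted):
    """Return (tolerances, accuracy): fraction of projects within each error tolerance."""
    errors = np.sort(np.abs(np.asarray(actual) - np.asarray(predicted)))
    accuracy = np.arange(1, len(errors) + 1) / len(errors)
    return errors, accuracy

# Hypothetical effort values (person-hours) and predictions per strategy.
actual = np.array([120, 300, 450, 80, 610])
pred_mean_imp = np.array([150, 280, 500, 60, 700])
pred_mlr_imp = np.array([130, 310, 430, 95, 650])

for label, pred in [("mean imputation", pred_mean_imp),
                    ("multinomial logistic regression", pred_mlr_imp)]:
    tol, acc = rec_curve(actual, pred)
    plt.step(tol, acc, where="post", label=label)
plt.xlabel("absolute error tolerance")
plt.ylabel("fraction of projects within tolerance")
plt.legend()
plt.show()
```
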
2.
  • Angelis, Lefteris, et al. (authors)
  • Methods for Statistical and Visual Comparison of Imputation Methods for Missing Data in Software Cost Estimation
  • 2011
  • In: Modern Software Engineering Concepts and Practices. IGI Global. ISBN 9781609602154, 9781609602178; pp. 221-241
  • Book chapter (peer-reviewed). Abstract:
    • Software Cost Estimation is a critical phase in the development of a software project, and over the years has become an emerging research area. A common problem in building software cost models is that the available datasets contain projects with lots of missing categorical data. The purpose of this chapter is to show how a combination of modern statistical and computational techniques can be used to compare the effect of missing data techniques on the accuracy of cost estimation. Specifically, a recently proposed missing data technique, the multinomial logistic regression, is evaluated and compared with four older methods: listwise deletion, mean imputation, expectation maximization and regression imputation with respect to their effect on the prediction accuracy of a least squares regression cost model. The evaluation is based on various expressions of the prediction error and the comparisons are conducted using statistical tests, resampling techniques and a visualization tool, the regression error characteristic curves.
  •  
3.
  • Azhar, Damir, et al. (authors)
  • Using ensembles for web effort estimation
  • 2013
  • Conference paper (peer-reviewed). Abstract:
    • Background: Despite the number of Web effort estimation techniques investigated, there is no consensus as to which technique produces the most accurate estimates, an issue shared by effort estimation in the general software estimation domain. A previous study in this domain has shown that ensembles of estimation techniques can be used to address this issue. Aim: The aim of this paper is to investigate whether ensembles of effort estimation techniques will be similarly successful when used on Web project data. Method: The previous study built ensembles using solo effort estimation techniques that were deemed superior. In order to identify these superior techniques two approaches were investigated: The first involved replicating the methodology used in the previous study, while the second approach used the Scott-Knott algorithm. Both approaches were done using the same 90 solo estimation techniques on Web project data from the Tukutuku dataset. The replication identified 16 solo techniques that were deemed superior and were used to build 15 ensembles, while the Scott-Knott algorithm identified 19 superior solo techniques that were used to build two ensembles. Results: The ensembles produced by both approaches performed very well against solo effort estimation techniques. With the replication, the top 12 techniques were all ensembles, with the remaining 3 ensembles falling within the top 17 techniques. These 15 effort estimation ensembles, along with the 2 built by the second approach, were grouped into the best cluster of effort estimation techniques by the Scott-Knott algorithm. Conclusion: While it may not be possible to identify a single best technique, the results suggest that ensembles of estimation techniques consistently perform well even when using Web project data.
  •  
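
As a rough illustration of the ensemble idea in entry 3 (not the paper's experimental setup; the data and the three regressors are invented stand-ins for the 90 solo techniques used in the study), an ensemble can be formed simply by averaging the estimates of several solo techniques.

```python
# Sketch: a mean-combination ensemble of three solo effort estimators
# on fabricated "Web project" data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Hypothetical project data: size-related features and effort in hours.
X = rng.uniform(1, 100, size=(40, 3))
y = 30 + 5 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 20, size=40)
X_train, X_test, y_train, y_test = X[:30], X[30:], y[:30], y[30:]

solo_models = [LinearRegression(),
               KNeighborsRegressor(n_neighbors=3),
               DecisionTreeRegressor(random_state=0)]
solo_preds = np.array([m.fit(X_train, y_train).predict(X_test)
                       for m in solo_models])

ensemble_pred = solo_preds.mean(axis=0)          # combine solo estimates
mae = np.mean(np.abs(ensemble_pred - y_test))    # mean absolute error
print(f"ensemble MAE: {mae:.1f} hours")
```
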
4.
  • Baliakas, Panagiotis, et al. (authors)
  • Clinical effect of stereotyped B-cell receptor immunoglobulins in chronic lymphocytic leukaemia: a retrospective multicentre study
  • 2014
  • In: The Lancet Haematology. ISSN 2352-3026; 1:2, pp. 74-84
  • Journal article (peer-reviewed). Abstract:
    • Background: About 30% of cases of chronic lymphocytic leukaemia (CLL) carry quasi-identical B-cell receptor immunoglobulins and can be assigned to distinct stereotyped subsets. Although preliminary evidence suggests that B-cell receptor immunoglobulin stereotypy is relevant from a clinical viewpoint, this aspect has never been explored in a systematic manner or in a cohort of adequate size that would enable clinical conclusions to be drawn. Methods: For this retrospective, multicentre study, we analysed 8593 patients with CLL for whom immunogenetic data were available. These patients were followed up in 15 academic institutions throughout Europe (in Czech Republic, Denmark, France, Greece, Italy, Netherlands, Sweden, and the UK) and the USA, and data were collected between June 1, 2012, and June 7, 2013. We retrospectively assessed the clinical implications of CLL B-cell receptor immunoglobulin stereotypy, with a particular focus on 14 major stereotyped subsets comprising cases expressing unmutated (U-CLL) or mutated (M-CLL) immunoglobulin heavy chain variable genes. The primary outcome of our analysis was time to first treatment, defined as the time between diagnosis and date of first treatment. Findings: 2878 patients were assigned to a stereotyped subset, of which 1122 patients belonged to one of 14 major subsets. Stereotyped subsets showed significant differences in terms of age, sex, disease burden at diagnosis, CD38 expression, and cytogenetic aberrations of prognostic significance. Patients within a specific subset generally followed the same clinical course, whereas patients in different stereotyped subsets, despite having the same immunoglobulin heavy variable gene and displaying similar immunoglobulin mutational status, showed substantially different times to first treatment. By integrating B-cell receptor immunoglobulin stereotypy (for subsets 1, 2, and 4) into the well established Dohner cytogenetic prognostic model, we showed that these subsets, which collectively account for around 7% of all cases of CLL and represent both U-CLL and M-CLL, constituted separate clinical entities, ranging from very indolent (subset 4) to aggressive disease (subsets 1 and 2). Interpretation: The molecular classification of chronic lymphocytic leukaemia based on B-cell receptor immunoglobulin stereotypy improves the Dohner hierarchical model and refines prognostication beyond immunoglobulin mutational status, with potential implications for clinical decision making, especially within prospective clinical trials.
  •  
5.
  • Baliakas, Panagiotis, et al. (authors)
  • Not all IGHV3-21 chronic lymphocytic leukemias are equal: prognostic considerations.
  • 2015
  • In: Blood. American Society of Hematology. ISSN 1528-0020, 0006-4971; 125:5, pp. 856-859
  • Journal article (peer-reviewed). Abstract:
    • An unresolved issue in chronic lymphocytic leukemia (CLL) is whether IGHV3-21 gene usage, in general, or the expression of stereotyped B-cell receptor immunoglobulin defining subset #2 (IGHV3-21/IGLV3-21), in particular, determines outcome for IGHV3-21-utilizing cases. We reappraised this issue in 8593 CLL patients of whom 437 (5%) used the IGHV3-21 gene with 254/437 (58%) classified as subset #2. Within subset #2, immunoglobulin heavy variable (IGHV)-mutated cases predominated, whereas non-subset #2/IGHV3-21 was enriched for IGHV-unmutated cases (P = .002). Subset #2 exhibited significantly shorter time-to-first-treatment (TTFT) compared with non-subset #2/IGHV3-21 (22 vs 60 months, P = .001). No such difference was observed between non-subset #2/IGHV3-21 vs the remaining CLL with similar IGHV mutational status. In conclusion, IGHV3-21 CLL should not be axiomatically considered a homogeneous entity with adverse prognosis, given that only subset #2 emerges as uniformly aggressive, contrasting non-subset #2/IGHV3-21 patients whose prognosis depends on IGHV mutational status, as in the remaining CLL.
  •  
6.
  • Barney, Sebastian, et al. (authors)
  • Offshore insourcing : A case study on software quality alignment
  • 2011
  • In: 2011 IEEE Sixth International Conference on Global Software Engineering. Helsinki: IEEE. ISBN 9781457711404, 9780769545035; pp. 146-155
  • Conference paper (peer-reviewed). Abstract:
    • Background: Software quality issues are commonly reported when offshoring software development. Value-based software engineering addresses this by ensuring key stakeholders have a common understanding of quality. Aim: This work seeks to understand the levels of alignment between key stakeholders on aspects of software quality for two products developed as part of an offshore insourcing arrangement. The study further aims to explain the levels of alignment identified. Method: Representatives of key stakeholder groups for both products ranked aspects of software quality. The results were discussed with the groups to gain a deeper understanding. Results: Low levels of alignment were found between the groups studied. This is associated with insufficiently defined quality requirements, a culture that does not question management and conflicting temporal reflections on the product's quality. Conclusion: The work emphasizes the need for greater support to align success-critical stakeholder groups in their understanding of quality when offshoring software development.
  •  
7.
  • Barney, Sebastian, et al. (authors)
  • Software quality across borders : Three case studies on company internal alignment
  • 2014
  • In: Information and Software Technology. Elsevier. ISSN 0950-5849, 1873-6025; 56:1, pp. 20-38
  • Journal article (peer-reviewed). Abstract:
    • Context: Software quality issues are commonly reported when offshoring software development. Value-based software engineering addresses this by ensuring key stakeholders have a common understanding of quality. Objective: This work seeks to understand the levels of alignment between key stakeholder groups within a company on the priority given to aspects of software quality developed as part of an offshoring relationship. Furthermore, the study aims to identify factors impacting the levels of alignment identified. Method: Three case studies were conducted, with representatives of key stakeholder groups ranking aspects of software quality in a hierarchical cumulative exercise. The results are analysed using Spearman rank correlation coefficients and inertia. The results were discussed with the groups to gain a deeper understanding of the issues impacting alignment. Results: Various levels of alignment were found between the various groups. The reasons for misalignment were found to include cultural factors, control of quality in the development process, short-term versus long-term orientations, understanding of cost-benefits of quality improvements, communication and coordination. Conclusions: The factors that negatively affect alignment can vary greatly between different cases. The work emphasises the need for greater support to align company internal success-critical stakeholder groups in their understanding of quality when offshoring software development.
  •  
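
Entry 7 measures alignment with Spearman rank correlation coefficients; the snippet below is a minimal sketch of that measure with invented quality aspects and rankings.

```python
# Sketch: Spearman rank correlation between two stakeholder groups' rankings
# of the same quality aspects (all names and ranks are hypothetical).
from scipy.stats import spearmanr

aspects = ["reliability", "usability", "performance", "security", "maintainability"]
rank_site_a = [1, 3, 2, 4, 5]   # hypothetical priority order at one site
rank_site_b = [2, 1, 4, 3, 5]   # hypothetical priority order at another site

rho, p_value = spearmanr(rank_site_a, rank_site_b)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# rho near 1 suggests well-aligned priorities; values near 0 or negative
# suggest the groups prioritize the quality aspects quite differently.
```
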
8.
  • Chatzipetrou, Panagiota, Assistant Professor, 1984-, et al. (authors)
  • A multivariate statistical framework for the analysis of software effort phase distribution
  • 2015
  • In: Information and Software Technology. Elsevier. ISSN 0950-5849, 1873-6025; 59, pp. 149-169
  • Journal article (peer-reviewed). Abstract:
    • Context: In software project management, the distribution of resources to various project activities is one of the most challenging problems since it affects team productivity, product quality and project constraints related to budget and scheduling. Objective: The study aims to (a) reveal the high complexity of modelling the effort usage proportion in different phases as well as the divergence from various rules-of-thumb in related literature, and (b) present a systematic data analysis framework, able to offer better interpretations and visualisation of the effort distributed in specific phases. Method: The basis for the proposed multivariate statistical framework is Compositional Data Analysis, a methodology appropriate for proportions, along with other methods like the deviation from rules-of-thumb, the cluster analysis and the analysis of variance. The effort allocations to phases, as reported in around 1500 software projects of the ISBSG R11 repository, were transformed to vectors of proportions of the total effort and were analysed with respect to prime project attributes. Results: The proposed statistical framework was able to detect high dispersion among data, distribution inequality and various interesting correlations and trends, groupings and outliers, especially with respect to other categorical and continuous project attributes. Only a very small number of projects were found close to the rules-of-thumb from the related literature. Significant differences in the proportion of effort spent in different phases for different types of projects were found. Conclusion: There is no simple model for the effort allocated to phases of software projects. The data from previous projects can provide valuable information regarding the distribution of the effort for various types of projects, through analysis with multivariate statistical methodologies. The proposed statistical framework is generic and can be easily applied in a similar sense to any dataset containing effort allocation to phases.
  •  
9.
  • Chatzipetrou, Panagiota, Assistant Professor, 1984-, et al. (authors)
  • An experience-based framework for evaluating alignment of software quality goals
  • 2015
  • In: Software Quality Journal. Springer. ISSN 0963-9314, 1573-1367; 23:4, pp. 567-594
  • Journal article (peer-reviewed). Abstract:
    • Efficient quality management of software projects requires knowledge of how various groups of stakeholders involved in software development prioritize the product and project goals. Agreements or disagreements among members of a team may originate from inherent groupings, depending on various professional or other characteristics. These agreements are not easily detected by conventional practices (discussions, meetings, etc.) since the natural language expressions are often obscuring, subjective, and prone to misunderstandings. It is therefore essential to have objective tools that can measure the alignment among the members of a team; especially critical for the software development is the degree of alignment with respect to the prioritization goals of the software product. The paper proposes an experience-based framework of statistical and graphical techniques for the systematic study of prioritization alignment, such as hierarchical cluster analysis, analysis of cluster composition, correlation analysis, and closest agreement-directed graph. This framework can provide a thorough and global picture of a team's prioritization perspective and can potentially aid managerial decisions regarding team composition and leadership. The framework is applied and illustrated in a study related to global software development where 65 individuals in different roles, geographic locations and professional relationships with a company, prioritize 24 goals from individual perception of the actual situation and for an ideal situation.
  •  
10.
  • Chatzipetrou, Panagiota, Assistant Professor, 1984-, et al. (authors)
  • An Investigation of Software Effort Phase Distribution Using Compositional Data Analysis
  • 2012
  • In: 38th EUROMICRO Conference on Software Engineering and Advanced Applications, SEAA 2012. IEEE. ISBN 9780769547909; pp. 367-375
  • Conference paper (peer-reviewed). Abstract:
    • One of the most significant problems faced by project managers is to effectively distribute the project resources and effort among the various project activities. Most importantly, project success depends on how well, or how balanced, the work effort is distributed among the project phases. This paper aims to obtain useful information regarding the correlation of the composition of effort attributed in phases for around 1,500 software projects of the ISBSG R11 database based on a promising statistical method called Compositional Data Analysis (CoDA). The motivation for applying this analysis is the observation that certain types of project data (effort distributions and attributes) do not relate in a direct way but present a spurious correlation. Effort distribution is compared to the project life-cycle activities, organization type, language type, function points and other prime project attributes. The findings are beneficial for building a basis for software cost estimation and improving future empirical software studies.
  •  
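
The core compositional step behind entries 8 and 10 can be sketched as follows; the phase names and effort figures are hypothetical, and the centred log-ratio (clr) transform shown is one standard CoDA transform, not necessarily the exact variant the authors used.

```python
# Sketch: treating phase effort as compositional data. Effort per phase is
# converted to proportions of total effort, then mapped with the centred
# log-ratio transform to escape the unit-sum constraint.
import numpy as np

# Hypothetical effort (hours) per phase for three projects:
# columns = [specify, design, build, test, implement]
effort = np.array([[120.0,  80.0, 400.0, 150.0, 50.0],
                   [ 60.0, 150.0, 300.0, 200.0, 90.0],
                   [ 30.0,  40.0, 500.0, 100.0, 30.0]])

proportions = effort / effort.sum(axis=1, keepdims=True)     # rows sum to 1
geometric_mean = np.exp(np.log(proportions).mean(axis=1, keepdims=True))
clr = np.log(proportions / geometric_mean)                   # centred log-ratio

print(np.round(proportions, 3))
print(np.round(clr, 3))
# Standard multivariate methods (clustering, ANOVA, PCA) are then applied
# to the clr coordinates rather than to the raw proportions.
```
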
11.
  • Chatzipetrou, Panagiota, Assistant Professor, 1984-, et al. (authors)
  • Prioritization of issues and requirements by cumulative voting : A compositional data analysis framework
  • 2010
  • In: 2010 36th EUROMICRO Conference on Software Engineering and Advanced Applications. Lille: IEEE. ISBN 9781424479016; pp. 361-370
  • Conference paper (peer-reviewed). Abstract:
    • Cumulative Voting (CV), also known as the Hundred-Point Method, is a simple and straightforward technique, used in various prioritization studies in software engineering. Multiple stakeholders (users, developers, consultants, marketing representatives or customers) are asked to prioritize issues concerning requirements, process improvements or change management on a ratio scale. The data obtained from such studies contain useful information regarding correlations of issues and trends of the respondents towards them. However, the multivariate and constrained nature of the data requires particular statistical analysis. In this paper we propose a statistical framework, multivariate Compositional Data Analysis (CoDA), for analyzing data obtained from CV prioritization studies. Certain methodologies for studying the correlation structure of variables are applied to a dataset concerning impact analysis issues prioritized by software professionals under different perspectives. These involve filling of zeros, transformation using the geometric mean, principal component analysis on the transformed variables and graphical representation by biplots and ternary plots.
  •  
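
A compact sketch of the pipeline described in entry 11 (zero filling, geometric-mean log-ratio transform, principal component analysis); the voting matrix is invented and the zero-replacement step is deliberately simplified.

```python
# Sketch of the framework's main steps: zero filling, log-ratio transform, PCA.
import numpy as np

# Hypothetical cumulative-voting data: each row is one stakeholder
# distributing 100 points over four issues.
votes = np.array([[50., 30., 20.,  0.],
                  [10., 60., 20., 10.],
                  [ 0., 40., 40., 20.]])

# 1. Replace zeros with a small value and re-close rows to 100
#    (a simplified stand-in for multiplicative zero replacement).
delta = 1.0
filled = np.where(votes == 0, delta, votes)
filled = filled / filled.sum(axis=1, keepdims=True) * 100

# 2. Centred log-ratio transform (division by the row geometric mean).
gm = np.exp(np.log(filled).mean(axis=1, keepdims=True))
clr = np.log(filled / gm)

# 3. Principal component analysis on the transformed data via SVD.
centered = clr - clr.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("principal axes:\n", np.round(vt, 2))
print("explained variance ratio:", np.round(explained, 2))
```
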
12.
  • Chatzipetrou, Panagiota, Assistant Professor, 1984-, et al. (authors)
  • Software product quality in global software development : Finding groups with aligned goals
  • 2011
  • In: 37th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA 2011). Oulu: IEEE Computer Society. ISBN 9781457710278; pp. 435-442
  • Conference paper (peer-reviewed). Abstract:
    • The development of a software product in an organization involves various groups of stakeholders who may prioritize the qualities of the product differently. This paper presents an empirical study of 65 individuals in different roles and in different locations, including onshoring, outsourcing and offshoring, prioritizing 24 software quality aspects. Hierarchical cluster analysis is applied to the prioritization data, separately for the situation today and the ideal situation, and the composition of the clusters, regarding the distribution of the inherent groupings within each of them, is analyzed. The analysis results in the observation that the roles are not that important in the clustering. However, compositions of clusters regarding the onshore-offshore relationships are significantly different, showing that the offshore participants have a stronger tendency to cluster together. In conclusion, stakeholders seem to form clusters of aligned understanding of priorities according to personal and cultural views rather than their roles in software development.
  •  
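
The hierarchical cluster analysis used in entries 9 and 12 can be sketched with SciPy as below; the stakeholders, priority weights, and the choice of Ward linkage are all illustrative assumptions rather than the authors' setup.

```python
# Sketch: hierarchical clustering of stakeholders by prioritization profile.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: stakeholders; columns: priority weights over quality aspects (sum to 100).
priorities = np.array([[40., 30., 20., 10.],
                       [35., 35., 20., 10.],
                       [10., 15., 40., 35.],
                       [ 5., 20., 45., 30.],
                       [25., 25., 25., 25.]])

# Ward linkage on Euclidean distances between prioritization profiles.
tree = linkage(priorities, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")
print("cluster membership per stakeholder:", labels)
# The composition of each cluster (roles, locations, on-/offshore) can then
# be inspected to see which groupings the clustering actually reflects.
```
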
13.
  • Chatzipetrou, Panagiota, Assistant Professor, 1984-, et al. (authors)
  • Statistical Analysis of Requirements Prioritization for Transition to Web Technologies : A Case Study in an Electric Power Organization
  • 2014
  • In: Software Quality. Model-Based Approaches for Advanced Software and Systems Engineering. Cham: Springer. ISBN 9783319036021, 9783319036014; pp. 63-84
  • Conference paper (peer-reviewed). Abstract:
    • Transition from an existing IT system to modern Web technologies provides multiple benefits to an organization and its customers. Such a transition in a large organization involves various groups of stakeholders who may prioritize differently the requirements of the software under development. In our case study, the organization is a leading domestic company in the field of electricity power. The existing online system supports the customer service along with the technical activities and has more than 1,500 registered users, while simultaneous access can be reached by 300 users. The paper presents an empirical study where 51 employees in different roles prioritize 18 software requirements using hierarchical cumulative voting. The goal of this study is to test significant differences in prioritization between groups of stakeholders. Statistical methods involving data transformation, ANOVA and Discriminant Analysis were applied to data. The results showed significant differences between roles of the stakeholders in certain requirements.
  •  
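
For the group comparisons in entry 13, a one-way ANOVA across stakeholder roles might look like the following sketch; the roles, scores, and prior transformation are invented for illustration and do not reproduce the study's data.

```python
# Sketch: one-way ANOVA on the (already transformed) priority given to one
# requirement by three hypothetical stakeholder roles.
from scipy.stats import f_oneway

customer_service = [4.1, 3.8, 4.5, 4.0, 3.9]
technicians      = [2.2, 2.8, 2.5, 3.0, 2.4]
managers         = [3.5, 3.2, 3.8, 3.6, 3.9]

f_stat, p_value = f_oneway(customer_service, technicians, managers)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates that at least one role prioritizes this
# requirement significantly differently from the others.
```
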
14.
  • Feldt, Robert, et al. (authors)
  • Links between the personalities, views and attitudes of software engineers
  • 2010
  • In: Information and Software Technology. Elsevier BV. ISSN 0950-5849, 1873-6025; 52:6, pp. 611-624
  • Journal article (peer-reviewed). Abstract:
    • Context: Successful software development and management depends not only on the technologies, methods and processes employed but also on the judgments and decisions of the humans involved. These, in turn, are affected by the basic views and attitudes of the individual engineers. Objective: The objective of this paper is to establish if these views and attitudes can be linked to the personalities of software engineers. Methods: We summarize the literature on personality and software engineering and then describe an empirical study on 47 professional engineers in ten different Swedish software development companies. The study evaluated the personalities of these engineers via the IPIP 50-item five-factor personality test and prompted them on their attitudes towards and basic views on their professional activities. Results: We present extensive statistical analyses of their responses to show that there are multiple, significant associations between personality factors and software engineering attitudes. The tested individuals are more homogeneous in personality than a larger sample of individuals from the general population. Conclusion: Taken together, the methodology and personality test we propose and the associated statistical analyses can help find and quantify relations between complex factors in software engineering projects in both research and practice.
  •  
15.
  •  
16.
  •  
17.
  • Gorschek, Tony, et al. (authors)
  • A large-scale empirical study of practitioners' use of object-oriented concepts
  • 2010
  • Conference paper (peer-reviewed). Abstract:
    • We present the first results from a survey carried out over the second quarter of 2009 examining how theories in object-oriented design are understood and used by software developers. We collected 3785 responses from software developers world-wide, which we believe is the largest survey of its kind. We targeted the use of encapsulation, class size as measured by number of methods, and depth of a class in the inheritance hierarchy. We found that, while overall practitioners followed advice on encapsulation, there was some variation of adherence to it. For class size and depth there was substantially less agreement with expert advice. In addition, inconsistencies were found within the use and perception of object-oriented concepts within the investigated group of developers. The results of this survey have far-reaching consequences for both practitioners and researchers, as they highlight and confirm central issues.
  •  
18.
  • Gorschek, Tony, et al. (authors)
  • On the use of software design models in software development practice : An empirical investigation
  • 2014
  • In: Journal of Systems and Software. Elsevier. ISSN 0164-1212; 95, pp. 176-193
  • Journal article (peer-reviewed). Abstract:
    • Research into software design models in general, and into the UML in particular, focuses on answering the question of how design models are used, completely ignoring the question of whether they are used. There is an assumption in the literature that the UML is the de facto standard, and that use of design models has had a profound and substantial effect on how software is designed by virtue of models giving the ability to do model-checking, code generation, or automated test generation. However, for this assumption to be true, there has to be significant use of design models in practice by developers. This paper presents the results of a survey summarizing the answers of 3785 developers answering the simple question on the extent to which design models are used before coding. We relate their use of models with (i) total years of programming experience, (ii) open or closed development, (iii) educational level, (iv) programming language used, and (v) development type. The answer to our question was that design models are not used very extensively in industry, and where they are used, the use is informal and without tool support, and the notation is often not UML. The use of models decreased with an increase in experience and increased with higher level of qualification. Overall we found that models are used primarily as a communication and collaboration mechanism where there is a need to solve problems and/or get a joint understanding of the overall design in a group. We also conclude that models are seldom updated after they are initially created and are usually drawn on a whiteboard or on paper.
  •  
19.
  • Jalali, Samireh, et al. (authors)
  • Investigating the Applicability of Agility Assessment Surveys : A Case Study
  • 2014
  • In: Journal of Systems and Software. Elsevier. ISSN 0164-1212; 98, pp. 172-190
  • Journal article (peer-reviewed). Abstract:
    • Context: Agile software development has become popular in the past decade even though it is not a particularly well-defined concept. The general principles in the Agile Manifesto can be instantiated in many different ways, and hence the perception of Agility may differ quite a lot. This has resulted in several conceptual frameworks being presented in the research literature to evaluate the level of Agility. However, the evidence of actual use in practice of these frameworks is limited. Objective: The objective in this paper is to identify online surveys that can be used to evaluate the level of Agility in practice, and to evaluate the surveys in an industrial setting. Method: Surveys for evaluating Agility were identified by systematically searching the web. Based on an exploration of the surveys found, two surveys were identified as most promising for our objective. The two surveys selected were evaluated in a case study with three Agile teams in a software consultancy company. The case study included a self-assessment of the Agility level by using the two surveys, interviews with the Scrum master and a team representative, interviews with the customers of the teams and a focus group meeting for each team. Results: The perception of team Agility was judged by each of the teams and their respective customer, and the outcome was compared with the results from the two surveys. Agility profiles were created based on the surveys. Conclusions: It is concluded that different surveys may very well judge Agility differently, which supports the viewpoint that it is not a well-defined concept. The researchers and practitioners agreed that one of the surveys, at least in this specific case, provided a better and more holistic assessment of the Agility of the teams in the case study.
  •  
20.
  • Khurum, Mahvish, et al. (authors)
  • A Controlled Experiment of a Method for Early Requirements Triage Utilizing Product Strategies
  • 2009
  • Conference paper (peer-reviewed). Abstract:
    • [Context and motivation] In market-driven product development of software intensive products large numbers of requirements threaten to overload the development organization. It is critical for product management to select the requirements aligned with the overall business goals, product strategies and discard others as early as possible. Thus, there is a need for an effective and efficient method that deals with this challenge and supports product managers in the continuous effort of early requirements triage [1, 2] based on product strategies. This paper evaluates such a method, A Method for Early Requirements Triage Utilizing Product Strategies (MERTS), which is built on the needs identified in literature and industry. [Question/problem] The research question answered in this paper is "If two groups of subjects have a product strategy, one group in NL format and one in MERTS format, will there be a difference between the two groups with regards to effectiveness and efficiency of requirements triage?" The effectiveness and efficiency of MERTS were evaluated through a controlled experiment in a lab environment with 50 software engineering graduate students as subjects. [Principal ideas/results] The results showed that the MERTS method is highly effective and efficient. [Contribution] The contribution of this paper is validation of the effectiveness and efficiency of the product strategies created through the MERTS method for requirements triage, prior to industry trials. A major limitation of the results is that the experiment was performed with graduate students and not product managers. However, the results showed that MERTS is ready for industry trials.
  •  
21.
  • Kuzniarz, Ludwik, et al. (authors)
  • Empirical extension of a classification framework for addressing consistency in model based development
  • 2011
  • In: Information and Software Technology. Elsevier. ISSN 0950-5849, 1873-6025; 53:3, pp. 214-229
  • Journal article (peer-reviewed). Abstract:
    • Context: Consistency constitutes an important aspect in the practical realization of modeling ideas in the process of software development and in the related research, which is diverse. A classification framework has been developed in order to aid model-based software construction by categorizing research problems related to consistency. However, the framework does not include information on the importance of classification elements. Objective: The aim was to extend the classification framework with information about the relative importance of the elements constituting the classification. The research question was how to express and obtain this information. Method: A survey was conducted on a sample of 24 stakeholders from academia and industry, with different roles, who answered a quantitative questionnaire. Specifically, the respondents prioritized perspectives and issues using an extended hierarchical voting scheme based on the hundred dollar test. The numerical data obtained were first weighted and normalized and then they were analyzed by descriptive statistics and bar charts. Results: The detailed analysis of the data revealed the relative importance of consistency perspectives and issues under different views, allowing for the desired extension of the classification framework with empirical information. The most highly valued issues come from the pragmatics perspective. These issues are the most important for tool builders and practitioners from industry, while for the respondents from the academia theory group some issues from the concepts perspective are equally important. Conclusion: The method of using empirical data from a hierarchical cumulative voting scheme for extending an existing research classification framework is useful for including information regarding the importance of the classification elements.
  •  
22.
  • Mittas, Nikolaos, et al. (authors)
  • Integrating non-parametric models with linear components for producing software cost estimations
  • 2015
  • In: Journal of Systems and Software. Elsevier. ISSN 0164-1212, 1873-1228; 99, pp. 120-134
  • Journal article (peer-reviewed). Abstract:
    • A long-lasting endeavor in the area of software project management is minimizing the risks caused by under- or over-estimations of the overall effort required to build new software systems. Deciding which method to use for achieving accurate cost estimations among the many methods proposed in the relevant literature is a significant issue for project managers. This paper investigates whether it is possible to improve the accuracy of estimations produced by popular non-parametric techniques by coupling them with a linear component, thus producing a new set of techniques called semi-parametric models (SPMs). The non-parametric models examined in this work include estimation by analogy (EbA), artificial neural networks (ANN), support vector machines (SVM) and locally weighted regression (LOESS). Our experimentation shows that the estimation ability of SPMs is superior to their non-parametric counterparts, especially in cases where both a linear and non-linear relationship exists between software effort and the related cost drivers. The proposed approach is empirically validated through a statistical framework which uses multiple comparisons to rank and cluster the models examined into non-overlapping groups that perform significantly differently.
  •  
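
Entry 22's semi-parametric idea, a linear component coupled with a non-parametric one, can be sketched roughly as below; this is not the authors' implementation, the data are fabricated, and kNN merely stands in for EbA, ANN, SVM or LOESS.

```python
# Sketch of a semi-parametric model: fit a linear component first, then let a
# non-parametric learner model the remaining non-linear residual.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
# Hypothetical cost-driver data with a linear plus non-linear effort signal.
X = rng.uniform(0, 10, size=(80, 2))
y = 100 + 20 * X[:, 0] + 50 * np.sin(X[:, 1]) + rng.normal(0, 10, size=80)
X_train, X_test, y_train, y_test = X[:60], X[60:], y[:60], y[60:]

linear = LinearRegression().fit(X_train, y_train)            # linear component
residuals = y_train - linear.predict(X_train)
nonparam = KNeighborsRegressor(n_neighbors=5).fit(X_train, residuals)

pred = linear.predict(X_test) + nonparam.predict(X_test)     # combined estimate
mae = np.mean(np.abs(pred - y_test))
print(f"semi-parametric MAE: {mae:.1f}")
```
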
23.
  • Petersen, Kai, et al. (authors)
  • Reasons for bottlenecks in very large-scale system of systems development
  • 2014
  • In: Information and Software Technology. Elsevier. ISSN 0950-5849, 1873-6025; 56:10, pp. 1403-1420
  • Journal article (peer-reviewed). Abstract:
    • Context: System of systems (SoS) is a set or arrangement of systems that results when independent and useful systems are to be incorporated into a larger system that delivers unique capabilities. Our investigation showed that the development life cycle (i.e. the activities transforming requirements into design, code, test cases, and releases) in SoS is more prone to bottlenecks in comparison to single systems. Objective: The objective of the research is to identify reasons for bottlenecks in SoS, prioritize their significance according to their effect on bottlenecks, and compare them with respect to different roles and different perspectives, i.e. SoS view (concerned with integration of systems), and systems view (concerned with system development and delivery). Method: The research method used is a case study at Ericsson AB. Results: Results show that the most significant reasons for bottlenecks are related to requirements engineering. All the different roles agree on the significance of requirements-related factors. However, there are also disagreements between the roles, in particular with respect to quality-related reasons. Quality-related hinders are primarily observed and highly prioritized by quality assurance responsibles. Furthermore, the SoS view and the system view perceive different hinders, and prioritize them differently. Conclusion: We conclude that solutions for requirements engineering in the SoS context are needed, quality awareness in the organization has to be achieved end to end, and the SoS and system views need to be aligned to avoid sub-optimization in improvements.
  •  
24.
  • Ros, Rasmus, et al. (authors)
  • Continuous experimentation scenarios : A case study in e-commerce
  • 2018
  • In: Proceedings - 44th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2018. ISBN 9781538673829; pp. 353-356
  • Conference paper (peer-reviewed). Abstract:
    • Controlled experiments on software variants enable e-commerce companies to increase sales by providing user-adapted functionality. Our goal is to understand how the context of experimentation influences tool support. We performed a case study at Apptus that develops algorithms for e-commerce. We investigated how the case company uses experiments through five semi-structured interviews. We identified four main scenarios of experimentation and found that there are stark differences in tool support for them. The scenarios illustrate that the aptness of tool support for experiments depends on four characteristics: (1) what the goal of the experiment is; validate or optimize, (2) whether the experiment is performed internally in the organisation or externally, (3) whether decisions are taken automatically or manually, and finally (4) whether the experiment should be repeated or is a singleton. These insights can be used by practitioners with an interest in efficient experimentation and to form a basis for further research into a taxonomy of experiments for software.
  •  
25.
  • Rovegård, Per, et al. (authors)
  • An Empirical Study on Views of Importance of Change Impact Analysis Issues
  • 2008
  • In: IEEE Transactions on Software Engineering. IEEE. ISSN 0098-5589; 34:4, pp. 513-530
  • Journal article (peer-reviewed). Abstract:
    • Change impact analysis (IA) is a change management activity that previously has been much studied from a technical perspective. For example, much work focuses on methods for determining the impact of a change. In this paper, we present results from a study on the role of IA in the change management process. In the study, IA issues were prioritized with respect to criticality by software professionals from an organizational perspective and a self-perspective. The software professionals belonged to three organizational levels: operative, tactical, and strategic. Qualitative and statistical analyses with respect to differences between perspectives and levels are presented. The results show that important issues for a particular level are tightly related to how the level is defined. Similarly, issues important from an organizational perspective are more holistic than those important from a self-perspective. However, our data indicate that the self-perspective colors the organizational perspective, meaning that personal opinions and attitudes cannot be easily disregarded. In comparing the perspectives and the levels, we visualize the differences in a way that allows us to discuss two classes of issues: high priority and medium priority. The most important issues from this point of view concern fundamental aspects of IA and its execution.
  •  
26.
  • Tempero, Ewan, et al. (authors)
  • Barriers to Refactoring
  • 2017
  • In: Communications of the ACM. Association for Computing Machinery. ISSN 0001-0782, 1557-7317; 60:10, pp. 54-61
  • Journal article (peer-reviewed). Abstract:
    • Refactoring [6] is something software developers like to do. They refactor a lot. But do they refactor as much as they would like? Are there barriers that prevent them from doing so? Refactoring is an important tool for improving quality. Many development methodologies rely on refactoring, especially for agile methodologies but also in more plan-driven organizations. If barriers exist, they would undermine the effectiveness of many product-development organizations. We conducted a large-scale survey in 2009 of 3,785 practitioners' use of object-oriented concepts [7], including questions as to whether they would refactor to deal with certain design problems. We expected either that practitioners would tell us our choice of design principles was inappropriate for basing a refactoring decision or that refactoring is the right decision to take when designs were believed to have quality problems. However, we were told the decision of whether or not to refactor was due to non-design considerations. It is now eight years since the survey, but little has changed in integrated development environment (IDE) support for refactoring, and what has changed has done little to address the barriers we identified.
  •  
27.
  •  
28.
  • Wohlin, Claes, et al. (authors)
  • The success factors powering industry-academia collaboration
  • 2011
  • In: IEEE Software. IEEE Computer Society. ISSN 0740-7459, 1937-4194; 29:2, pp. 67-73
  • Journal article (peer-reviewed). Abstract:
    • Collaboration between industry and academia supports improvement and innovation in industry and helps to ensure industrial relevance in academic research. This article presents an exploratory study of the factors for successful collaboration between industry and academia in software research.
  •  
Publication type
journal article (15)
conference paper (11)
book chapter (2)
Content type
peer-reviewed (26)
other academic/artistic (2)
Author/editor
Angelis, Lefteris (25)
Chatzipetrou, Panagi ... (10)
Wohlin, Claes (9)
Feldt, Robert (4)
Mittas, Nikolaos (4)
Torkar, Richard (3)
Andreou, Andreas S. (3)
Papatheocharous, Efi (3)
Smedby, Karin E. (2)
Mansouri, Larry (2)
Juliusson, Gunnar (2)
Grahn, Håkan (2)
Geisler, Christian H (2)
Agathangelidis, Andr ... (2)
Davi, Frederic (2)
Langerak, Anton W. (2)
Baliakas, Panagiotis (2)
Stamatopoulos, Kosta ... (2)
Scarfo, Lydia (2)
Sutton, Lesley-Ann (2)
Ghia, Paolo (2)
Hadzidimitriou, Anas ... (2)
Belessi, Chrysoula (2)
Darzentas, Nikos (2)
Yan, Xiao-Jie (2)
Davis, Zadie (2)
Chu, Charles C. (2)
Giudicelli, Veroniqu ... (2)
Pedersen, Lone Bredo (2)
Anagnostopoulos, Ach ... (2)
Pospisilova, Sarka (2)
Lefranc, Marie-Paule (2)
Chiorazzi, Nicholas (2)
Panagiotidis, Panagi ... (2)
Nguyen-Khac, Florenc ... (2)
Gardiner, Anne (2)
Plevova, Karla (2)
Minga, Eva (2)
Oscier, David (2)
Tsanousa, Athina (2)
Shanafelt, Tait (2)
Sandberg, Yorick (2)
Vojdeman, Fie Juhl (2)
Boudjogra, Myriam (2)
Tzenou, Tatiana (2)
Chatzouli, Maria (2)
Veronese, Silvio (2)
van Lom, Kirsten (2)
Francova, Hana Skuhr ... (2)
Facco, Monica (2)
Higher education institution
Blekinge Tekniska Högskola (17)
Örebro universitet (10)
Lunds universitet (5)
Uppsala universitet (2)
Högskolan Väst (2)
RISE (2)
Karolinska Institutet (2)
Linnéuniversitetet (1)
Language
English (28)
Research subject (UKÄ/SCB)
Natural sciences (24)
Engineering and technology (4)
Social sciences (3)
Medical and health sciences (2)

Year
