SwePub
Search the SwePub database


Hit list for the search "WFRF:(Mendes Emilia)"

Search: WFRF:(Mendes Emilia)

  • Results 1-50 of 83
1.
  • Dias, N., et al. (author)
  • Outcomes of Elective and Non-elective Fenestrated-branched Endovascular Aortic Repair for Treatment of Thoracoabdominal Aortic Aneurysms
  • 2023
  • In: Annals of Surgery. - : Lippincott Williams & Wilkins. - 0003-4932 .- 1528-1140. ; 278:4, pp. 568-577
  • Journal article (peer-reviewed), abstract:
    • Objective: To describe outcomes after elective and non-elective fenestrated-branched endovascular aortic repair (FB-EVAR) for thoracoabdominal aortic aneurysms (TAAAs). Background: FB-EVAR has been increasingly utilized to treat TAAAs; however, outcomes after non-elective versus elective repair are not well described. Methods: Clinical data of consecutive patients undergoing FB-EVAR for TAAAs at 24 centers (2006-2021) were reviewed. Endpoints, including early mortality and major adverse events (MAEs), all-cause mortality, and aortic-related mortality (ARM), were analyzed and compared in patients who had non-elective versus elective repair. Results: A total of 2603 patients (69% males; mean age 72 ± 10 years) underwent FB-EVAR for TAAAs. Elective repair was performed in 2187 patients (84%) and non-elective repair in 416 patients [16%; 268 (64%) symptomatic, 148 (36%) ruptured]. Non-elective FB-EVAR was associated with higher early mortality (17% vs 5%, P < 0.001) and higher rates of MAEs (34% vs 20%, P < 0.001). Median follow-up was 15 months (interquartile range, 7-37 months). At 3 years, survival was lower (50 ± 4% vs 70 ± 1%) and the cumulative incidence of ARM higher (21 ± 3% vs 7 ± 1%) for non-elective versus elective patients (P < 0.001). On multivariable analysis, non-elective repair was associated with increased risk of all-cause mortality (hazard ratio, 1.92; 95% CI, 1.50-2.44; P < 0.001) and ARM (hazard ratio, 2.43; 95% CI, 1.63-3.62; P < 0.001). Conclusions: Non-elective FB-EVAR of symptomatic or ruptured TAAAs is feasible, but carries a higher incidence of early MAEs and increased all-cause mortality and ARM compared with elective repair. Long-term follow-up is warranted to justify the treatment.
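The multivariable analysis above reports adjusted hazard ratios for non-elective repair. A minimal sketch of how such a ratio could be obtained with a Cox proportional-hazards model is given below, assuming the lifelines library; the data frame and column names are invented for illustration and are not the study's data.

```python
# Hedged sketch: Cox proportional-hazards model for all-cause mortality.
# The tiny data frame and its column names are hypothetical, not the study's dataset.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_followup": [12, 36, 5, 28, 14, 40, 22, 9],   # time to event or censoring
    "died":            [1,  0,  1,  0,  1,  0,  0,  1],  # 1 = death observed
    "non_elective":    [1,  0,  1,  0,  0,  0,  1,  1],  # 1 = symptomatic/ruptured repair
    "age":             [78, 70, 74, 66, 68, 81, 72, 77],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followup", event_col="died")
cph.print_summary()  # exp(coef) for "non_elective" is the adjusted hazard ratio
```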
  •  
2.
  • Dias-Neto, Marina, et al. (author)
  • Comparison of single- and multistage strategies during fenestrated-branched endovascular aortic repair of thoracoabdominal aortic aneurysms
  • 2023
  • In: Journal of Vascular Surgery. - : MOSBY-ELSEVIER. - 0741-5214 .- 1097-6809. ; 77:6, pp. 1588-1597
  • Journal article (peer-reviewed), abstract:
    • Objective: The aim of this study was to compare outcomes of a single- or multistage approach during fenestrated-branched endovascular aortic repair (FB-EVAR) of extensive thoracoabdominal aortic aneurysms (TAAAs). Methods: We reviewed the clinical data of consecutive patients treated by FB-EVAR for extent I to III TAAAs in 24 centers (2006-2021). All patients received single-brand manufactured patient-specific or off-the-shelf fenestrated-branched stent grafts. Staging strategies included proximal thoracic aortic repair, minimally invasive segmental artery coil embolization, temporary aneurysm sac perfusion, and combinations of these techniques. Endpoints were analyzed for elective repair in patients who had a single- or multistage approach, before and after propensity score adjustment for baseline differences, and included the composite 30-day/in-hospital mortality and/or permanent paraplegia, major adverse events, patient survival, and freedom from aortic-related mortality. Results: A total of 1947 patients (65% male; mean age, 71 ± 8 years) underwent FB-EVAR of 155 extent I (10%), 729 extent II (46%), and 713 extent III TAAAs (44%). A single-stage approach was used in 939 patients (48%) and a multistage approach in 1008 patients (52%). A multistage approach was more frequently used in patients undergoing elective compared with non-elective repair (55% vs 35%; P < .001). Staging strategies were proximal thoracic aortic repair in 743 patients (74%), temporary aneurysm sac perfusion in 128 (13%), minimally invasive segmental artery coil embolization in 10 (1%), and combinations in 127 (12%). Among patients undergoing elective repair (n = 1597), the composite endpoint of 30-day/in-hospital mortality and/or permanent paraplegia occurred in 14% of single-stage and 6% of multistage approach patients (P < .001). After adjustment with a propensity score, a multistage approach was associated with lower rates of 30-day/in-hospital mortality and/or permanent paraplegia (odds ratio, 0.466; 95% confidence interval, 0.271-0.801; P = .006) and higher patient survival at 1 year (86.9 ± 1.3% vs 79.6 ± 1.7%) and 3 years (72.7 ± 2.1% vs 64.2 ± 2.3%; adjusted hazard ratio, 0.714; 95% confidence interval, 0.528-0.966; P = .029), compared with a single-stage approach. Conclusions: Staging elective FB-EVAR of extent I to III TAAAs was associated with decreased risk of mortality and/or permanent paraplegia at 30 days or within the hospital stay, and with higher patient survival at 1 and 3 years.
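The comparison above relies on propensity score adjustment for baseline differences. The sketch below illustrates one common variant of that idea, inverse probability of treatment weighting with a logistic propensity model; the columns, toy data, and choice of weighting scheme are assumptions for illustration, not the study's actual analysis.

```python
# Hedged sketch: propensity-score (inverse probability of treatment) weighting
# to compare a composite endpoint between single- and multistage repair.
# Column names and the toy data are hypothetical, not the study's dataset.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "multistage": [1, 0, 1, 0, 1, 0, 1, 0],        # treatment indicator
    "age":        [71, 68, 75, 70, 73, 66, 69, 74],
    "extent_II":  [1, 1, 0, 0, 1, 0, 1, 1],         # baseline covariate
    "endpoint":   [0, 1, 0, 1, 0, 0, 0, 1],         # 30-day mortality / paraplegia
})

# 1) Model the probability of receiving the multistage approach.
ps_model = LogisticRegression().fit(df[["age", "extent_II"]], df["multistage"])
ps = ps_model.predict_proba(df[["age", "extent_II"]])[:, 1]

# 2) Inverse-probability weights: 1/ps for treated, 1/(1-ps) for controls.
w = np.where(df["multistage"] == 1, 1 / ps, 1 / (1 - ps))

# 3) Weighted endpoint rates approximate the adjusted comparison.
for group in (1, 0):
    mask = df["multistage"] == group
    rate = np.average(df.loc[mask, "endpoint"], weights=w[mask])
    print(f"multistage={group}: weighted endpoint rate = {rate:.2f}")
```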
  •  
3.
  • Mendes, Fabiana, et al. (author)
  • Insights on the relationship between decision-making style and personality in software engineering
  • 2021
  • In: Information and Software Technology. - : Elsevier B.V. - 0950-5849 .- 1873-6025. ; 136
  • Journal article (peer-reviewed), abstract:
    • Context: Software development involves many activities, and decision making is an essential one. Various factors can impact a decision-making process, and by understanding such factors, one can improve the process. Since people are the ones making decisions, some human-related aspects are amongst those influencing factors. One such aspect is the decision maker's personality. Objective: This research investigates the relationship between decision-making style and personality within the context of software project development. Method: We conducted a survey in a population of Brazilian software engineers to gather data on their personality and decision-making style. Results: Data from 63 participants was gathered and resulted in the identification of seven statistically significant correlations between decision-making style and personality (personality factor and personality facets). Furthermore, we built a regression model in which decision-making style (DMS) was the response variable and personality factors the independent variables. The backward elimination procedure selected only agreeableness to explain 4.2% of DMS variation. The model accuracy was evaluated and deemed good enough. Regarding the moderation effect of demographic variables (age, educational level, experience, and role) on the relationship between DMS and Agreeableness, the analysis showed that only software engineers’ role has such effect. Conclusion: This paper contributes toward understanding the relationship between DMS and personality. Results show that the personality variable agreeableness can explain the variation in decision-making style. Furthermore, someone's role in a software development project can impact the strength of the relationship between DMS and agreeableness. © 2021 Elsevier B.V.
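The abstract above describes a regression model in which backward elimination retained only agreeableness as a predictor of decision-making style. A minimal sketch of p-value-based backward elimination with statsmodels is shown below; the variables and toy data are hypothetical, not the survey data.

```python
# Hedged sketch: ordinary least squares with backward elimination on p-values,
# one way to arrive at a single-predictor model such as DMS ~ agreeableness.
# Column names and data are hypothetical, not the survey data.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "dms":               [3.1, 2.8, 3.6, 2.5, 3.9, 3.0, 2.7, 3.4],
    "agreeableness":     [3.5, 2.9, 4.0, 2.6, 4.2, 3.1, 2.8, 3.7],
    "conscientiousness": [3.0, 3.8, 2.9, 3.5, 3.2, 4.0, 3.1, 2.6],
    "openness":          [3.9, 3.1, 3.4, 2.8, 4.1, 3.0, 3.6, 3.3],
})

y = df["dms"]
predictors = ["agreeableness", "conscientiousness", "openness"]

# Drop the least significant predictor until all remaining p-values are < 0.05.
while predictors:
    model = sm.OLS(y, sm.add_constant(df[predictors])).fit()
    pvals = model.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] < 0.05:
        break
    predictors.remove(worst)

print(model.summary())  # R-squared shows the share of DMS variance explained
```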
  •  
4.
  • Salleh, Norsaremah, et al. (author)
  • A Systematic Mapping Study of Value-Based Software Engineering
  • 2019
  • In: EUROMICRO Conference Proceedings. - : Institute of Electrical and Electronics Engineers Inc. - 9781728132853 ; pp. 404-411
  • Conference paper (peer-reviewed), abstract:
    • Integrating value-oriented perspectives into the principles and practices of software engineering is critical to ensure that software development and management activities address all key stakeholders' views and also balance short- and long-term goals. This is put forward in the discipline of Value-Based Software Engineering (VBSE). In this paper, a mapping study of VBSE is detailed. We classify evidence on VBSE principles and practices, research methods, and research types. This mapping study includes 134 studies located from online searches and backward snowballing of references. Our results show that VB Requirements Engineering (22%) and VB Planning and Control (19%) were the two principles and practices most investigated in the VBSE literature, whereas VB Risk Management, VB People Management and Value Creation (3% each) were the three least researched. In terms of research method, the most commonly employed method is case-study research. In terms of research types, most of the studies (28%) proposed solution technique(s) without empirical validation. © 2019 IEEE.
  •  
5.
  • Salleh, Norsaremah, et al. (author)
  • Value-based Software Engineering : A Systematic Mapping Study
  • 2023
  • In: e-Informatica Software Engineering Journal. - : Wroclaw University of Technology. - 1897-7979 .- 2084-4840. ; 17:1
  • Journal article (peer-reviewed), abstract:
    • Background: Integrating value-oriented perspectives into the principles and practices of software engineering is fundamental to ensure that software development activities address key stakeholders’ views and also balance short- and long-term goals. This is put forward in the discipline of value-based software engineering (VBSE). Aim: This study aims to provide an overview of VBSE with respect to the research efforts that have been put into VBSE. Method: We conducted a systematic mapping study to classify evidence on value definitions, studies’ quality, VBSE principles and practices, research topics, methods, types, contribution facets, and publication venues. Results: From 143 studies we found that the term “value” has not been clearly defined in many studies. VB Requirements Engineering and VB Planning and Control were the two principles most investigated, whereas VB Risk Management and VB People Management were the least researched. Most studies showed very good reporting and relevance quality and acceptable credibility, but poor rigour. The main research topic was Software Requirements, and case study research was the most used method. The majority of studies contribute towards methods and processes, while very few studies have proposed metrics and tools. Conclusion: We highlighted the research gaps and implications for research and practice to support VBSE. © 2023 The Authors.
  •  
6.
  • Azhar, Damir, et al. (author)
  • Using ensembles for web effort estimation
  • 2013
  • Conference paper (peer-reviewed), abstract:
    • Background: Despite the number of Web effort estimation techniques investigated, there is no consensus as to which technique produces the most accurate estimates, an issue shared with effort estimation in the general software domain. A previous study in this domain showed that ensembles of estimation techniques can be used to address this issue. Aim: The aim of this paper is to investigate whether ensembles of effort estimation techniques will be similarly successful when used on Web project data. Method: The previous study built ensembles using solo effort estimation techniques that were deemed superior. In order to identify these superior techniques, two approaches were investigated: the first involved replicating the methodology used in the previous study, while the second approach used the Scott-Knott algorithm. Both approaches were applied to the same 90 solo estimation techniques on Web project data from the Tukutuku dataset. The replication identified 16 solo techniques that were deemed superior and were used to build 15 ensembles, while the Scott-Knott algorithm identified 19 superior solo techniques that were used to build two ensembles. Results: The ensembles produced by both approaches performed very well against solo effort estimation techniques. With the replication, the top 12 techniques were all ensembles, with the remaining 3 ensembles falling within the top 17 techniques. These 15 effort estimation ensembles, along with the 2 built by the second approach, were grouped into the best cluster of effort estimation techniques by the Scott-Knott algorithm. Conclusion: While it may not be possible to identify a single best technique, the results suggest that ensembles of estimation techniques consistently perform well even when using Web project data.
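The abstract above combines solo estimation techniques into ensembles. A minimal sketch of the simplest such combination, averaging the predictions of a few solo estimators, is given below; the toy Web-project features and effort values are invented, and the three estimators are only examples, not the 90 techniques used in the study.

```python
# Hedged sketch: a simple mean-combination ensemble of "solo" effort estimators.
# The dataset and features are toy values, not the Tukutuku data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

# Toy Web-project data: [pages, images, features] -> effort (person-hours)
X = np.array([[20, 15, 3], [50, 40, 8], [10, 5, 1], [80, 60, 12], [35, 20, 5]])
y = np.array([120, 400, 60, 700, 220])

solo_estimators = [
    LinearRegression(),
    KNeighborsRegressor(n_neighbors=2),     # analogy-style estimator
    DecisionTreeRegressor(random_state=0),  # regression tree (CART-style)
]

new_project = np.array([[40, 25, 6]])
predictions = [est.fit(X, y).predict(new_project)[0] for est in solo_estimators]

# The ensemble estimate is the mean of the solo estimates.
print("solo estimates:", [round(p) for p in predictions])
print("ensemble estimate:", round(float(np.mean(predictions))))
```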
  •  
7.
  • Borg, Markus, et al. (author)
  • Evaluation of Traceability Recovery in Context: A Taxonomy for Information Retrieval Tools
  • 2012
  • In: 16th International Conference on Evaluation & Assessment in Software Engineering (EASE 2012). - : IET. - 9781849195416 ; pp. 111-120
  • Conference paper (peer-reviewed), abstract:
    • Background: Development of complex, software-intensive systems generates large amounts of information. Several researchers have developed tools implementing information retrieval (IR) approaches to suggest traceability links among artifacts. Aim: We explore the consequences of the fact that a majority of the evaluations of such tools have focused on benchmarking of mere tool output. Method: To illustrate this issue, we have adapted a framework of general IR evaluations to a context taxonomy specifically for IR-based traceability recovery. Furthermore, we evaluate a previously proposed experimental framework by conducting a study using two publicly available tools on two datasets originating from development of embedded software systems. Results: Our study shows that even though both datasets contain software artifacts from embedded development, the characteristics of the two datasets differ considerably, and consequently so do the traceability outcomes. Conclusions: To enable replications and secondary studies, we suggest that datasets should be thoroughly characterized in future studies on traceability recovery, especially when they cannot be disclosed. Also, while we conclude that the experimental framework provides useful support, we argue that our proposed context taxonomy is a useful complement. Finally, we discuss how empirical evidence of the feasibility of IR-based traceability recovery can be strengthened in future research.
  •  
8.
  • Britto, Ricardo, et al. (author)
  • A Specialized Global Software Engineering Taxonomy for Effort Estimation
  • 2016
  • In: International Conference on Global Software Engineering. - : IEEE Computer Society. - 9781509026807 ; pp. 154-163
  • Conference paper (peer-reviewed), abstract:
    • To facilitate the sharing and combination of knowledge by Global Software Engineering (GSE) researchers and practitioners, the need for a common terminology and knowledge classification scheme has been identified, and as a consequence, a taxonomy and an extension were proposed. In addition, a systematic literature review and a survey on, respectively, the state of the art and the state of practice of effort estimation in GSE were conducted, showing that despite its importance in practice, the GSE effort estimation literature is scarce and reported in an ad hoc way. Therefore, this paper proposes a specialized GSE taxonomy for effort estimation, which was built on the recently proposed general GSE taxonomy (including the extension) and was also based on the findings from the two empirical studies and on expert knowledge. The specialized taxonomy was validated using data from eight finished GSE projects. Our effort estimation taxonomy for GSE can help both researchers and practitioners by supporting the reporting of new GSE effort estimation studies, i.e., new studies will be easier to identify, compare, aggregate and synthesize. Further, it can also help practitioners by providing them with an initial set of factors that can be considered when estimating effort for GSE projects.
  •  
9.
  • Britto, Ricardo, et al. (author)
  • A Taxonomy of Web Effort Predictors
  • 2017
  • In: Journal of Web Engineering. - : Rinton Press. - 1540-9589 .- 1544-5976. ; 16:7-8, pp. 541-570
  • Journal article (peer-reviewed), abstract:
    • Web engineering as a field has emerged to address challenges associated with developing Web applications. It is known that the development of Web applications differs from the development of non-Web applications, especially regarding aspects such as Web size metrics. The classification of existing Web engineering knowledge would be beneficial for both practitioners and researchers in many different ways, such as finding research gaps and supporting decision making. In the context of Web effort estimation, a taxonomy was proposed to classify the existing size metrics, and more recently a systematic literature review was conducted to identify aspects related to Web resource/effort estimation. However, there is no study that classifies Web predictors (both size metrics and cost drivers). The main objective of this study is to organize the body of knowledge on Web effort predictors by designing and using a taxonomy, aiming at supporting both research and practice in Web effort estimation. To design our taxonomy, we used a recently proposed taxonomy design method. As input, we used the results of a previously conducted systematic literature review (updated in this study), an existing taxonomy of Web size metrics, and expert knowledge. We identified 165 unique Web effort predictors from a final set of 98 primary studies; they were used as one of the bases to design our hierarchical taxonomy. The taxonomy has three levels, organized into 13 categories. We demonstrated the utility of the taxonomy and body of knowledge by using examples. The proposed taxonomy can be beneficial in the following ways: (i) it can help to identify research gaps and relevant literature, and (ii) it can support the selection of predictors for Web effort estimation. We also intend to extend the taxonomy presented to include effort estimation techniques and accuracy metrics.
  •  
10.
  • Britto, Ricardo, 1982-, et al. (author)
  • An Empirical Investigation on Effort Estimation in Agile Global Software Development
  • 2015
  • In: Proceedings of the 2015 IEEE 10th International Conference on Global Software Engineering. - 9781479984091 ; pp. 38-45
  • Conference paper (peer-reviewed), abstract:
    • Effort estimation is a project management activity that is mandatory for the execution of software projects. Despite its importance, there have been just a few studies published on such activities within the Agile Global Software Development (AGSD) context. Their aggregated results were recently published as part of a secondary study that reported the state of the art on effort estimation in AGSD. This study aims to complement the above-mentioned secondary study by means of an empirical investigation on the state of the practice on effort estimation in AGSD. To do so, a survey was carried out using an on-line questionnaire as instrument and a sample comprising software practitioners experienced in effort estimation within the AGSD context. Results show that the effort estimation techniques used within the AGSD and collocated contexts remained unchanged, with planning poker being the one employed the most. Sourcing strategies were found to have no or a small influence upon the choice of estimation techniques. With regard to effort predictors, global challenges such as cultural and time zone differences were reported, in addition to factors that are commonly considered in the collocated context, such as team experience. Finally, many challenges that impact the accuracy of the effort estimates were reported by the respondents, such as problems with the software requirements and the fact that the communication effort between sites is not properly accounted for.
  •  
11.
  • Britto, Ricardo, et al. (author)
  • An Extended Global Software Engineering Taxonomy
  • 2016
  • In: Journal of Software Engineering Research and Development. - : Springer. - 2195-1721. ; 4:3
  • Journal article (peer-reviewed), abstract:
    • In Global Software Engineering (GSE), the need for a common terminology and knowledge classification has been identified to facilitate the sharing and combination of knowledge by GSE researchers and practitioners. A GSE taxonomy was recently proposed to address such a need, focusing on a core set of dimensions; however, its dimensions do not represent an exhaustive list of relevant GSE factors. Therefore, this study extends the existing taxonomy, incorporating new GSE dimensions that were identified by means of two empirical studies conducted recently.
  •  
12.
  • Britto, Ricardo, 1982-, et al. (author)
  • Effort Estimation in Agile Global Software Development Context
  • 2014
  • In: Agile Methods. Large-Scale Development, Refactoring, Testing, and Estimation. - Cham : Springer. - 9783319143583 ; pp. 182-192
  • Conference paper (peer-reviewed), abstract:
    • Both Agile Software Development (ASD) and Global Software Development (GSD) are 21st century trends in the software industry. Many studies reported in the literature describe software companies that have applied an agile method or practice GSD. Given that effort estimation plays a remarkable role in software project management, how do companies perform effort estimation when they use an agile method in a GSD context? Based on two effort estimation Systematic Literature Reviews (SLRs), one within the ASD context and the other within the GSD context, this paper reports a study in which we combined the results of these SLRs to describe the state of the art of effort estimation in the agile global software development (AGSD) context.
  •  
13.
  • Britto, Ricardo, 1982-, et al. (author)
  • Effort Estimation in Global Software Development: A Systematic Literature Review
  • 2014
  • In: Proceedings of the 2014 9th IEEE International Conference on Global Software Engineering. - 9781479943609 ; pp. 135-144
  • Conference paper (peer-reviewed), abstract:
    • Nowadays, software systems are a key factor in the success of many organizations, as in most cases they play a central role in helping them attain a competitive advantage. However, despite their importance, software systems may be quite costly to develop, substantially decreasing companies’ profits. In order to tackle this challenge, many organizations look for ways to decrease costs and increase profits by applying new software development approaches, like Global Software Development (GSD). Some aspects of a software project, like communication, cooperation and coordination, are more challenging in globally distributed than in co-located projects, since language, cultural and time zone differences are factors which can increase the effort required to carry out a software project globally. Communication, coordination and cooperation aspects directly affect the effort estimation of a project, which is one of the critical tasks related to the management of a software development project. There are many studies related to effort estimation methods/techniques for co-located projects. However, there is evidence that the co-located approaches do not fit GSD. So, this paper presents the results of a systematic literature review of effort estimation in the context of GSD, which aimed to help both researchers and practitioners gain a holistic view of the current state of the art regarding effort estimation in the context of GSD. The results suggest that there is room to improve the current state of the art on effort estimation in GSD.
  •  
14.
  • Dallora Moraes, Ana Luiza, et al. (author)
  • A decision tree multifactorial approach for predicting dementia in a 10 years’ time
  • Other publication (other academic/artistic), abstract:
    • Background: Dementia is a complex neurological disorder whose mechanisms are little understood and for which no therapeutic treatment has been identified, to date, that reverts or alleviates its symptoms. It affects the older adult population, causing a progressive cognitive decline that can become severe enough to impair the individuals' independence and functioning. In this scenario, prognosis research directed at identifying modifiable risk factors, over a large enough time frame to delay or prevent the development of dementia, is substantially important. Objective: This study investigates a decision tree multifactorial approach for the prognosis of dementia in individuals not diagnosed with this disorder at baseline, and their development (or not) of dementia in a time frame of 10 years. Methods: This study retrieved data from the Swedish National Study on Aging and Care, which consisted of 726 subjects (313 males and 413 females), of whom 91 presented a diagnosis of dementia at the 10-year study mark. A K-nearest neighbors multiple imputation method was employed to handle the missing data. A wrapper feature selection was employed to select the best features from a set of 75 variables, which considered factors related to demographics, social factors, lifestyle, medical history, biochemical tests, physical examination, psychological assessment and diverse health instruments relevant to dementia evaluation. Lastly, a cost-sensitive decision tree approach was employed in order to build predictive models in a stratified nested cross-validation experimental setup. Results: The proposed approach achieved an AUC of 0.745 and recall of 0.722 for the 10-year prognosis of dementia. Our findings showed that most of the variables selected by the tree are related to modifiable risk factors, of which physical strength was an important factor across all ages of the sample. There was also a lack of variables related to the health instruments routinely used for dementia diagnosis, which might not be sensitive enough to predict dementia in a 10 years’ time. Conclusions: The proposed model identified diverse modifiable factors, in a 10 years’ time from diagnosis, that could be investigated for possible interventions in order to delay or prevent the onset of dementia.
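The methods described above combine k-NN imputation, a cost-sensitive decision tree, and stratified cross-validation. The sketch below shows how such a pipeline might look with scikit-learn; the synthetic features, outcome definition, and hyperparameters are assumptions for illustration, not the SNAC data or the study's exact nested setup.

```python
# Hedged sketch: k-NN imputation + cost-sensitive decision tree, evaluated with
# stratified cross-validation and AUC. Data and settings are invented.
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # e.g. age, grip strength, test scores, ...
y = (X[:, 1] + 0.5 * X[:, 3] < -0.8).astype(int)   # minority class: dementia at 10 years
X[rng.random(X.shape) < 0.1] = np.nan              # ~10% missing values, imputed below

model = Pipeline([
    ("impute", KNNImputer(n_neighbors=5)),
    # class_weight="balanced" penalises misclassifying the rare (dementia) class,
    # a simple form of cost-sensitive learning.
    ("tree", DecisionTreeClassifier(class_weight="balanced", max_depth=4, random_state=0)),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC over folds: {auc.mean():.3f}")
```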
  •  
15.
  • Dallora Moraes, Ana Luiza, et al. (author)
  • Bone age assessment with various machine learning techniques : A systematic literature review and meta-analysis
  • 2019
  • In: PLOS ONE. - : Public Library of Science. - 1932-6203. ; 14:7
  • Research review (peer-reviewed), abstract:
    • Background: The assessment of bone age and skeletal maturity and its comparison to chronological age is an important task in the medical environment, for diagnosis in pediatric endocrinology, orthodontics and orthopedics, and in the legal environment, to determine whether an individual is a minor when documents are lacking. Since it is a time-consuming activity that can be prone to inter- and intra-rater variability, methods which can automate it, like Machine Learning techniques, are of value. Objective: The goal of this paper is to present the state-of-the-art evidence, trends and gaps in the research related to bone age assessment studies that make use of Machine Learning techniques. Method: A systematic literature review was carried out, starting with the writing of the protocol, followed by searches on three databases: Pubmed, Scopus and Web of Science, to identify the relevant evidence related to bone age assessment using Machine Learning techniques. One round of backward snowballing was performed to find additional studies. A quality assessment was performed on the selected studies to check for bias and low-quality studies, which were removed. Data were extracted from the included studies to build summary tables. Lastly, a meta-analysis was performed on the performances of the selected studies. Results: 26 studies constituted the final set of included studies. Most of them proposed automatic systems for bone age assessment and investigated methods for bone age assessment based on hand and wrist radiographs. The samples used in the studies were mostly comprehensive or bordered the age of 18, and the data originated in most cases from the United States and Western Europe. Few studies explored ethnic differences. Conclusions: There is a clear focus of the research on bone age assessment methods based on radiographs, whilst other types of medical imaging without radiation exposure (e.g. magnetic resonance imaging) are not much explored in the literature. Also, socioeconomic and other aspects that could influence bone age were not addressed in the literature. Finally, studies that make use of more than one region of interest for bone age assessment are scarce. Copyright: © 2019 Dallora et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
  •  
16.
  • Felizardo, Katia, et al. (author)
  • Defining protocols of systematic literature reviews in software engineering : A survey
  • 2017
  • In: Proceedings - 43rd Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2017. - : Institute of Electrical and Electronics Engineers Inc. - 9781538621400 ; pp. 202-209
  • Conference paper (peer-reviewed), abstract:
    • Context: Despite being defined during the first phase of the Systematic Literature Review (SLR) process, the protocol is usually refined when other phases are performed. Several researchers have reported their experiences in applying SLRs in Software Engineering (SE); however, there is still a lack of studies discussing the iterative nature of the protocol definition, especially how it should be perceived by researchers conducting SLRs. Objective: The main goal of this study is to perform a survey aiming to identify: (i) the perception of SE researchers related to protocol definition; (ii) the activities of the review process that typically lead to protocol refinements; and (iii) which protocol items are refined in those activities. Method: A survey was performed with 53 SE researchers. Results: Our results show that: (i) protocol definition and the pilot test are the two activities that most often lead to further protocol refinements; (ii) the data extraction form is the most modified item. Besides that, this study confirmed the iterative nature of the protocol definition. Conclusions: An iterative pilot test can facilitate refinements to the protocol. © 2017 IEEE.
  •  
17.
  • Felizardo, Katia Romero, et al. (author)
  • Using Forward Snowballing to update Systematic Reviews in Software Engineering
  • 2016
  • In: ESEM'16. - New York, NY, USA : Association for Computing Machinery.
  • Conference paper (peer-reviewed), abstract:
    • Background: A Systematic Literature Review (SLR) is a methodology used to aggregate relevant evidence related to one or more research questions. Whenever new evidence is published after the completion of an SLR, the SLR should be updated in order to preserve its value. However, updating SLRs involves significant effort. Objective: The goal of this paper is to investigate the application of forward snowballing to support the update of SLRs. Method: We compare the outcomes of an update achieved using forward snowballing versus a published update using the search-based approach, i.e., searching for studies in electronic databases using a search string. Results: Forward snowballing showed a higher precision and a slightly lower recall. It reduced the number of primary studies to filter by more than five times; however, it missed one relevant study. Conclusions: Due to its high precision, we believe that the use of forward snowballing considerably reduces the effort in updating SLRs in Software Engineering; however, the risk of missing relevant papers should not be underrated.
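The comparison above rests on the precision and recall of each search strategy against a gold standard of relevant studies. A minimal sketch of that computation is shown below; the study identifiers and set sizes are made up to mirror the reported pattern (higher precision, slightly lower recall for snowballing).

```python
# Hedged sketch: comparing search strategies for an SLR update by precision and
# recall against a known set of relevant studies. Study IDs are made up.
def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

relevant = {"S1", "S2", "S3", "S4", "S5"}                               # gold standard
database_search = {"S1", "S2", "S3", "S4", "S5", *{f"N{i}" for i in range(95)}}
forward_snowballing = {"S1", "S2", "S3", "S4", *{f"N{i}" for i in range(15)}}

for name, retrieved in [("database search", database_search),
                        ("forward snowballing", forward_snowballing)]:
    p, r = precision_recall(retrieved, relevant)
    print(f"{name}: precision={p:.2f}, recall={r:.2f}")
```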
  •  
18.
  • Felizardo, Katia Romero, et al. (author)
  • Using visual text mining to support the study selection activity in systematic literature reviews
  • 2011
  • In: 2011 Fifth International Symposium on Empirical Software Engineering and Measurement, ESEM 2011. - Washington : IEEE. - 9781457722035 - 9780769546049 ; pp. 77-86
  • Conference paper (peer-reviewed), abstract:
    • Background: A systematic literature review (SLR) is a methodology used to aggregate all relevant existing evidence to answer a research question of interest. Although crucial, the process used to select primary studies can be arduous and time-consuming, and must often be conducted manually. Objective: We propose a novel approach, known as 'Systematic Literature Review based on Visual Text Mining' or simply SLR-VTM, to support the primary study selection activity using visual text mining (VTM) techniques. Method: We conducted a case study to compare the performance and effectiveness of four doctoral students in selecting primary studies manually and using the SLR-VTM approach. To enable the comparison, we also developed a VTM tool that implemented our approach. We hypothesized that students using SLR-VTM would present improved selection performance and effectiveness. Results: Our results show that incorporating VTM in the SLR study selection activity reduced the time spent in this activity and also increased the number of studies correctly included. Conclusions: Our pilot case study presents promising results suggesting that the use of VTM may indeed be beneficial during the study selection activity when performing an SLR.
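The abstract above uses visual text mining to support study selection. The sketch below shows one ingredient such approaches typically rely on, TF-IDF vectorisation plus clustering of candidate abstracts so similar studies can be screened together; it is only a rough illustration of the general idea, not the SLR-VTM tool, and the example abstracts are invented.

```python
# Hedged sketch: cluster candidate abstracts by textual similarity as a
# screening aid. The abstracts below are invented examples.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "effort estimation for web applications using regression models",
    "bayesian networks for software effort prediction",
    "usability evaluation of mobile interfaces",
    "user experience study of touch interfaces",
    "cross company effort estimation with machine learning",
    "eye tracking analysis of interface usability",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(abstracts)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for label, text in sorted(zip(labels, abstracts)):
    print(label, text)
```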
  •  
19.
  • Guimarães, Gleyser, et al. (author)
  • Investigating the relationship between personalities and agile team climate : A replicated study
  • 2024
  • In: Information and Software Technology. - : Elsevier. - 0950-5849 .- 1873-6025. ; 169
  • Journal article (peer-reviewed), abstract:
    • Context: A study in 2020 (S1) explored the relationship between personality traits and team climate perceptions of software professionals working in agile teams. S1 surveyed 43 software professionals from a large telecom company in Sweden and found that a person's ability to get along with team members (Agreeableness) significantly and positively influences the perceived level of team climate. Further, they observed that personality traits accounted for less than 15% of the variance in team climate. Objective: The study described herein replicates S1 using data gathered from 148 software professionals from an industrial partner in Brazil. Method: We used the same research methods as S1. We employed a survey to gather the personality and climate data, which were later analyzed using correlation and regression analyses. The former aimed to measure the level of association between personality traits and climate, and the latter to estimate team climate factors using personality traits as predictors. Results: The correlation analyses showed statistically significant and positive associations between two personality traits, Agreeableness and Conscientiousness, and all five team climate factors. There was also a significant and positive association between Openness and Team Vision. Our results corroborate those from S1 with respect to two personality traits, Openness and Agreeableness; however, in S1, Openness was significantly and positively associated with Support for Innovation (not Team Vision). In regard to Agreeableness, in S1 it was also significantly and positively associated with perceived team climate. Furthermore, our regression models also support S1's findings: personality traits accounted for less than 15% of the variance in team climate. Conclusion: Despite differences in location, sample size, and operational domain, our study confirmed S1's results on the limited influence of personality traits. Agreeableness and Openness were significant predictors for team climate, although the predictive factors differed. These discrepancies highlight the necessity for further research, incorporating larger samples and additional predictor variables, to better comprehend the intricate relationship between personality traits and team climate across diverse cultural and professional settings. © 2024 Elsevier B.V.
  •  
20.
  • Kalinowski, Marcos, et al. (author)
  • An industry ready defect causal analysis approach exploring Bayesian networks
  • 2014
  • In: Lecture Notes in Business Information Processing. - Vienna : Springer. - 9783319036021 ; pp. 12-33
  • Conference paper (peer-reviewed), abstract:
    • Defect causal analysis (DCA) has shown itself to be an efficient means to improve the quality of software processes and products. A DCA approach exploring Bayesian networks, called DPPI (Defect Prevention-Based Process Improvement), resulted from research following an experimental strategy. Its conceptual phase considered evidence-based guidelines acquired through systematic reviews and feedback from experts in the field. Afterwards, in order to move towards industry readiness, the approach evolved based on the results of an initial proof of concept and a set of primary studies. This paper describes the experimental strategy followed and provides an overview of the resulting DPPI approach. Moreover, it presents results from applying DPPI in industry in the context of a real software development lifecycle, which allowed further comprehension and insights into using the approach from an industrial perspective.
  •  
21.
  • Kocaguneli, Ekrem, et al. (author)
  • Transfer learning in effort estimation
  • 2015
  • In: Empirical Software Engineering. - : Springer. - 1382-3256 .- 1573-7616. ; 20:3, pp. 813-843
  • Journal article (peer-reviewed), abstract:
    • When projects lack sufficient local data to make predictions, they try to transfer information from other projects. How can we best support this process? In the field of software engineering, transfer learning has been shown to be effective for defect prediction. This paper checks whether it is possible to build transfer learners for software effort estimation. We use data on 154 projects from 2 sources to investigate transfer learning between different time intervals, and 195 projects from 51 sources to provide evidence on the value of transfer learning for traditional cross-company learning problems. We find that the same transfer learning method can be useful for both the cross-company learning problem and the cross-time learning problem. It is misguided to think that: (1) old data of an organization are irrelevant to the current context, or (2) data of another organization cannot be used for local solutions. Transfer learning is a promising research direction that transfers relevant data across time intervals and domains.
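The abstract above investigates transferring effort data across companies and time periods. The sketch below illustrates the general relevancy-filtering idea often used in this setting, training only on the cross-company projects most similar to the target project; the features, toy data, and nearest-neighbour filter are assumptions for illustration, not the specific transfer learner evaluated in the paper.

```python
# Hedged sketch: relevancy filtering for cross-company effort estimation,
# i.e. fit a model only on the cross-company projects nearest to the new one.
# Features and values are invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Toy cross-company projects: [size_kloc, team_size, duration_months] -> effort
X_cross = np.array([[10, 4, 6], [50, 12, 14], [5, 2, 3], [80, 20, 18],
                    [25, 6, 8], [60, 15, 15], [15, 5, 7], [40, 10, 12]])
y_cross = np.array([800, 5200, 300, 9000, 1900, 6100, 1100, 3800])

new_project = np.array([[30, 8, 9]])   # local project lacking historical data

scaler = StandardScaler().fit(X_cross)
nn = NearestNeighbors(n_neighbors=4).fit(scaler.transform(X_cross))
_, idx = nn.kneighbors(scaler.transform(new_project))

# Fit a model only on the most similar cross-company projects.
model = LinearRegression().fit(X_cross[idx[0]], y_cross[idx[0]])
print("estimated effort (person-hours):", round(model.predict(new_project)[0]))
```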
  •  
22.
  • Lokan, Chris, et al. (author)
  • Investigating the use of duration-based moving windows to improve software effort prediction
  • 2012
  • Conference paper (peer-reviewed), abstract:
    • To date, most research in software effort estimation has not taken into account any form of chronological split when selecting projects for training and testing sets. A chronological split represents the use of a project's starting and completion dates, such that any model that estimates effort for a new project p only uses as its training set projects that were completed prior to p's starting date. Three recent studies investigated the use of chronological splits, using a type of chronological split called a moving window, which represented a subset of the most recent projects completed prior to a project p's starting date. They found some evidence in favour of using windows whenever projects were recent. These studies all defined window sizes as fixed numbers of recent projects. In practice, we suggest that estimators are more likely to think in terms of elapsed time than the size of the data set when deciding which projects to include in a training set. Therefore, this paper investigates the effect on accuracy when using moving windows of various durations to form training sets on which to base effort estimates. Our results show that the use of windows based on duration can affect the accuracy of estimates (in this data set, a window of about three years' duration appears best), but to a lesser extent than windows based on a fixed number of projects.
  •  
23.
  • Lokan, Chris, et al. (author)
  • Investigating the use of duration-based moving windows to improve software effort prediction : A replicated study
  • 2014
  • In: Information and Software Technology. - : Elsevier. - 0950-5849 .- 1873-6025. ; 56:9, pp. 1063-1075
  • Journal article (peer-reviewed), abstract:
    • Context: Most research in software effort estimation has not considered chronology when selecting projects for training and testing sets. A chronological split represents the use of a project's starting and completion dates, such that any model that estimates effort for a new project p only uses as training data projects that were completed prior to p's start. Four recent studies investigated the use of chronological splits, using moving windows wherein only the most recent projects completed prior to a project's starting date were used as training data. The first three studies (S1-S3) found some evidence in favor of using windows; they all defined window sizes as fixed numbers of recent projects. In practice, we suggest that estimators think in terms of elapsed time rather than the size of the data set when deciding which projects to include in a training set. In the fourth study (S4) we showed that the use of windows based on duration can also improve estimation accuracy. Objective: This paper's contribution is to extend S4 using an additional dataset, and to also investigate the effect on accuracy when using moving windows of various durations. Method: Stepwise multivariate regression was used to build prediction models, using all available training data, and also using windows of various durations to select training data. Accuracy was compared based on absolute residuals and MREs; the Wilcoxon test was used to check statistical significance between results. Accuracy was also compared against estimates derived from windows containing fixed numbers of projects. Results: Neither fixed-size nor fixed-duration windows provided superior estimation accuracy in the new data set. Conclusions: Contrary to intuition, our results suggest that it is not always beneficial to exclude old data when estimating effort for new projects. When windows are helpful, windows based on duration are effective.
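The studies above form training sets with duration-based moving windows. A minimal sketch of that selection step, keeping only projects completed within the last N years before the new project's start and fitting a regression on them, is shown below; the dataset, column names, and window length are invented for illustration.

```python
# Hedged sketch: duration-based moving window for effort estimation.
# Only projects completed within the window before the new project's start
# are used as training data. All values are invented.
import pandas as pd
from sklearn.linear_model import LinearRegression

projects = pd.DataFrame({
    "completed":  pd.to_datetime(["2008-03-01", "2009-06-15", "2010-11-20",
                                  "2012-01-10", "2013-05-05", "2013-12-01"]),
    "size_fp":    [120, 300, 80, 450, 200, 150],      # function points
    "effort_ph":  [900, 2500, 600, 3900, 1500, 1200], # person-hours
})

new_start = pd.Timestamp("2014-02-01")
window_years = 3

# Keep only projects completed before the new project starts AND inside the window.
in_window = projects[
    (projects["completed"] < new_start) &
    (projects["completed"] >= new_start - pd.DateOffset(years=window_years))
]

model = LinearRegression().fit(in_window[["size_fp"]], in_window["effort_ph"])
print("training set size:", len(in_window))
print("estimate for a 250 FP project:",
      round(model.predict(pd.DataFrame({"size_fp": [250]}))[0]))
```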
  •  
24.
  • Lokan, Chris, et al. (author)
  • Investigating the use of moving windows to improve software effort prediction : a replicated study
  • 2017
  • In: Empirical Software Engineering. - : Springer-Verlag New York. - 1382-3256 .- 1573-7616. ; 22:2, pp. 716-767
  • Journal article (peer-reviewed), abstract:
    • To date most research in software effort estimation has not taken chronology into account when selecting projects for training and validation sets. A chronological split represents the use of a project’s starting and completion dates, such that any model that estimates effort for a new project p only uses as its training set projects that have been completed prior to p’s starting date. A study in 2009 (“S3”) investigated the use of chronological split taking into account a project’s age. The research question investigated was whether the use of a training set containing only the most recent past projects (a “moving window” of recent projects) would lead to more accurate estimates when compared to using the entire history of past projects completed prior to the starting date of a new project. S3 found that moving windows could improve the accuracy of estimates. The study described herein replicates S3 using three different and independent data sets. Estimation models were built using regression, and accuracy was measured using absolute residuals. The results contradict S3, as they do not show any gain in estimation accuracy when using windows for effort estimation. This is a surprising result: the intuition that recent data should be more helpful than old data for effort estimation is not supported. Several factors, which are discussed in this paper, might have contributed to such contradicting results. Some of our future work entails replicating this work using other datasets, to understand better when using windows is a suitable choice for software companies.
  •  
25.
  • Manzano, Martí, et al. (author)
  • A Method to Estimate Software Strategic Indicators in Software Development : An Industrial Application
  • 2021
  • In: Information and Software Technology. - : Elsevier B.V. - 0950-5849 .- 1873-6025. ; 129
  • Journal article (peer-reviewed), abstract:
    • Context: Exploiting software development related data from software-development intensive organizations to support tactical and strategic decision making is a challenge. Combining data-driven approaches with expert knowledge has been highlighted as a sensible approach for leading software-development intensive organizations to rightful decision-making improvements. However, most existing proposals lack important aspects that hinder their industrial uptake, such as customization guidelines to fit the proposals to other contexts and/or automatic or semi-automatic data collection support for putting them forward in a real organization. As a result, existing proposals are rarely used in the industrial context. Objective: Support software-development intensive organizations with guidance and tools for exploiting software development related data and expert knowledge to improve their decision making. Method: We have developed a novel method called SESSI (Specification and Estimation of Software Strategic Indicators) that was articulated from industrial experiences with Nokia, Bittium, Softeam and iTTi in the context of the Q-Rapids European project, following a design science approach. As part of the industrial summative evaluation, we performed the first case study focused on the application of the method. Results: We detail the phases and steps of the SESSI method and illustrate its application in the development of ModelioNG, a software product of the Modeliosoft development firm. Conclusion: The application of the SESSI method in the context of the ModelioNG case study has provided us with useful feedback to improve the method and has evidenced that applying the method was feasible in this context. © 2020 Elsevier B.V.
  •  
26.
  • Mendes, Emilia, et al. (author)
  • An Expert-Based Requirements Effort Estimation Model Using Bayesian Networks
  • 2016
  • In: SOFTWARE QUALITY. - Cham : Springer. - 9783319270326 - 9783319270333 ; pp. 79-93
  • Conference paper (peer-reviewed), abstract:
    • [Motivation]: There are numerous software companies worldwide that split the software development life cycle into at least two separate projects: an initial project where a requirements specification document is prepared, and a follow-up project where the previously prepared requirements document is used as input to developing a software application. These follow-up projects can also be delegated to a third party, as occurs in numerous global software development scenarios. Effort estimation is one of the cornerstones of any type of project management; however, a systematic literature review on requirements effort estimation found hardly any empirical study investigating this topic. [Objective]: The goal of this paper is to describe an industrial case study where an expert-based requirements effort estimation model was built and validated for the Brazilian Navy. [Method]: A knowledge engineering of Bayesian networks process was employed to build the requirements effort estimation model. [Results]: The expert-based requirements effort estimation model was built with the participation of seven software requirements analysts and project managers, leading to 28 prediction factors and 30+ relationships. The model was validated based on real data from 11 large requirements specification projects. The model was incorporated into the Brazilian Navy's quality assurance process to be used by their software requirements analysts and managers. [Conclusion]: This paper details a case study where an expert-based requirements effort estimation model based solely on knowledge from requirements analysts and project managers was successfully built to help the Brazilian Navy estimate the requirements effort for their projects.
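The abstract above describes an expert-elicited Bayesian network for requirements effort. The sketch below shows the mechanics of such a network on a deliberately tiny scale, two invented parent factors and one effort node, with probabilities computed by direct enumeration; the nodes, states, and numbers are assumptions for illustration, not the 28-factor model built for the Brazilian Navy.

```python
# Hedged sketch: a tiny discrete Bayesian network for requirements effort,
# with invented nodes and expert-style probabilities, queried by enumeration.
from itertools import product

p_complexity = {"low": 0.6, "high": 0.4}                 # P(Complexity)
p_experience = {"low": 0.3, "high": 0.7}                 # P(AnalystExperience)
# P(Effort = high | Complexity, AnalystExperience), elicited from "experts".
p_effort_high = {("low", "low"): 0.40, ("low", "high"): 0.15,
                 ("high", "low"): 0.85, ("high", "high"): 0.55}

def effort_high_given(evidence: dict) -> float:
    """P(Effort = high | evidence) by enumerating the joint distribution."""
    num = den = 0.0
    for c, e in product(p_complexity, p_experience):
        if evidence.get("complexity", c) != c or evidence.get("experience", e) != e:
            continue
        joint = p_complexity[c] * p_experience[e]
        num += joint * p_effort_high[(c, e)]
        den += joint
    return num / den

print(effort_high_given({}))                                            # prior belief
print(effort_high_given({"complexity": "high"}))                        # after one observation
print(effort_high_given({"complexity": "high", "experience": "low"}))   # fully observed parents
```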
  •  
27.
  • Mendes, Emilia (author)
  • Applying a knowledge management technique to improve risk assessment and effort estimation of healthcare software projects
  • 2014
  • In: Communications in Computer and Information Science. - Berlin, Heidelberg : Springer International Publishing. - 1865-0929 .- 1865-0937. ; 457, pp. 40-56
  • Journal article (peer-reviewed), abstract:
    • One of the pillars for sound Software Project Management is reliable effort estimation. Therefore, it is important to fully identify the fundamental factors that affect an effort estimate for a new project and how these factors are inter-related. This paper describes a case study where a Knowledge Management technique was employed to build an expert-based effort estimation model to estimate effort for healthcare software projects. This model was built with the participation of seven project managers, and was validated using data from 22 past finished projects. The model led to numerous changes in process and also in business. The company adapted their existing effort estimation process to be in line with the model that was created, and the use of a mathematically-based model also led to an increase in the number of projects being delegated to this company by other company branches worldwide.
  •  
28.
  •  
29.
  • Mendes, Emilia, et al. (author)
  • Cross- vs. Within-company cost estimation studies revisited : An extended systematic review
  • 2014
  • Conference paper (peer-reviewed), abstract:
    • The objective of this paper is to extend a previously conducted systematic literature review (SLR) that investigated under what circumstances individual organizations would be able to rely on cross-company based estimation models. [Method] We applied the same methodology used in the SLR we are extending herein (covering the period 2006-2013), based on primary studies that compared predictions from cross-company models with predictions from within-company models constructed from analysis of project data. [Results] We identified 11 additional papers; however, two of these did not present independent results and one had inconclusive findings. Two of the remaining eight papers presented both trials where cross-company predictions were not significantly different from within-company predictions and trials where they were significantly different. Four found that cross-company models gave prediction accuracy significantly different from within-company models (one of them in favor of cross-company models), while two found no significant difference. The main pattern when examining the study-related factors was that studies where cross-company predictions were significantly different from within-company predictions employed larger within-company data sets. [Conclusions] Overall, half of the analyzed evidence indicated that cross-company estimation models are not significantly worse than within-company estimation models. Moreover, there is some evidence that a larger sample size does not imply higher estimation accuracy, and that samples for building estimation models should be carefully selected/filtered based on quality control and project similarity aspects. The results need to be combined with the findings from the SLR we are extending to allow further investigation of this topic.
  •  
30.
  • Mendes, Emilia (author)
  • Improving Software Effort Estimation Using an Expert-centred Approach
  • 2012
  • In: Lecture Notes in Computer Science. - Berlin, Heidelberg : Springer. ; pp. 18-33
  • Conference paper (peer-reviewed), abstract:
    • A cornerstone of software project management is effort estimation, the process by which effort is forecasted and used as a basis to predict costs and allocate resources effectively, so enabling projects to be delivered on time and within budget. Effort estimation is a very complex domain where the relationship between factors is non-deterministic and has an inherently uncertain nature, and where corresponding decisions and predictions require reasoning with uncertainty. Most studies in this field, however, have to date investigated ways to improve software effort estimation by proposing and comparing techniques to build effort prediction models where such models are built solely from data on past software projects: data-driven models. The drawback with such an approach is threefold: first, it ignores the explicit inclusion of uncertainty, which is inherent to the effort estimation domain, into such models; second, it ignores the explicit representation of causal relationships between factors; third, it relies solely on the variables being part of the dataset used for model building, under the assumption that those variables represent the fundamental factors within the context of software effort prediction. Recently, as part of New Zealand and, later on, Brazilian government-funded projects, we investigated the use of an expert-centred approach in combination with a technique that enables the explicit inclusion of uncertainty and causal relationships as a means to improve software effort estimation. This paper will first provide an overview of the effort estimation process, followed by a discussion of how an expert-centred approach to improving such a process can be advantageous to software companies. In addition, we also detail our experience building and validating six different expert-based effort estimation models for ICT companies in New Zealand and Brazil. Post-mortem interviews with the participating companies showed that they found the entire process extremely beneficial and worthwhile, and that all the models created remained in use by those companies. Finally, the methodology that is the focus of this paper, centred on expert knowledge elicitation and participation, can be employed not only to improve a software effort estimation process, but also to improve other project management-related activities.
  •  
31.
  • Mendes, Emilia, et al. (author)
  • Realising Individual and Team Capability in Agile Software Development : A Qualitative Investigation
  • 2018
  • In: Proceedings - 44th Euromicro Conference on Software Engineering and Advanced Applications, SEAA 2018. - : IEEE. - 9781538673836 ; pp. 183-190
  • Conference paper (peer-reviewed), abstract:
    • Several studies have shown that both individual and team capability can affect software development performance and project success; a deeper understanding of such phenomena is crucial within the context of Agile Software Development (ASD), given that its workforce is a key source of agility. This paper contributes towards such understanding by means of a case study that uses data from 14 interviews carried out at a large telecommunications company, within the context of a mobile money transfer system developed in Sweden and India, to identify individual and team capability measures used to form productive teams. Our results identified 10 individual and five team capability measures, of which, respectively, five and four had not been previously characterised by a systematic literature review (SLR) on this same topic. That review aggregated evidence for a total of 133 individual and 28 team capability measures. Further work entails extending our findings via interviewing other software/software-intensive industries practicing ASD.
  •  
32.
  • Mendes, Emilia, et al. (author)
  • Search Strategy to Update Systematic Literature Reviews in Software Engineering
  • 2019
  • In: EUROMICRO Conference Proceedings. - : Institute of Electrical and Electronics Engineers Inc. - 9781728132853 ; pp. 355-362
  • Conference paper (peer-reviewed), abstract:
    • [Context] Systematic Literature Reviews (SLRs) have been adopted within the Software Engineering (SE) domain for more than a decade to provide meaningful summaries of evidence on several topics. Many of these SLRs are now outdated, and there are no standard proposals on how to update SLRs in SE. [Objective] The goal of this paper is to provide recommendations on how best to search for evidence when updating SLRs in SE. [Method] To achieve our goal, we compare and discuss outcomes from applying different search strategies to identifying primary studies in a previously published SLR update on effort estimation. [Results] The use of a single iteration of forward snowballing with Google Scholar, employing the original SLR and its primary studies as the seed set, seems to be the most cost-effective way to search for new evidence when updating SLRs. [Conclusions] The recommendations can be used to support decisions on how to update SLRs in SE. © 2019 IEEE.
  •  
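The search strategy recommended above amounts to one forward-snowballing pass over a seed set made up of the original SLR and its primary studies. The sketch below shows only that bookkeeping; fetch_citing_papers is a hypothetical placeholder, since in practice the "cited by" lists come from Google Scholar and are usually inspected manually.

# Sketch of single-iteration forward snowballing over a seed set.
# `fetch_citing_papers` is a hypothetical placeholder: hook it up to whatever
# citation source you use (often Google Scholar, inspected manually).

from typing import Iterable, Set

def fetch_citing_papers(paper_id: str) -> Set[str]:
    """Placeholder: return identifiers of papers that cite `paper_id`."""
    raise NotImplementedError("Connect this to your citation source of choice.")

def forward_snowball_once(seed_set: Iterable[str]) -> Set[str]:
    """One forward-snowballing iteration: collect everything that cites the
    seed papers, then drop papers already in the seed set."""
    seeds = set(seed_set)
    candidates: Set[str] = set()
    for paper in seeds:
        candidates |= fetch_citing_papers(paper)
    return candidates - seeds

# Usage: seed with the original SLR plus its primary studies, then apply the
# usual inclusion/exclusion criteria to the returned candidate set.
# new_candidates = forward_snowball_once(["original-slr"] + primary_studies)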
33.
  •  
34.
  •  
35.
  • Mendes, Emilia, et al. (författare)
  • Towards improving decision making and estimating the value of decisions in value-based software engineering : the VALUE framework
  • 2018
  • Ingår i: Software quality journal. - : Springer-Verlag New York. - 0963-9314 .- 1573-1367. ; 26:2, s. 607-656
  • Tidskriftsartikel (refereegranskat)abstract
    • To sustain growth, maintain competitive advantage and to innovate, companies must make a paradigm shift in which both short- and long-term value aspects are employed to guide their decision-making. Such a need is clearly pressing in innovative industries, such as ICT, and is also the core of Value-based Software Engineering (VBSE). The goal of this paper is to detail a framework called VALUE (improving decision-making relating to software-intensive products and services development) and to show its application in practice to a large ICT company in Finland. The VALUE framework includes a mixed-methods approach, as follows: to elicit key stakeholders’ tacit knowledge regarding factors used during a decision-making process, either transcripts from interviews with key stakeholders are analysed and validated in focus group meetings or focus-group meeting(s) are directly applied. These value factors are later used as input to a Web-based tool (Value tool) employed to support decision making. This tool was co-created with four industrial partners in this research via a design science approach that includes several case studies and focus-group meetings. Later, data on key stakeholders’ decisions gathered using the Value tool, plus additional input from key stakeholders, are used, in combination with the Expert-based Knowledge Engineering of Bayesian Network (EKEBN) process, coupled with the weighted sum algorithm (WSA) method, to build and validate a company-specific value estimation model. The application of our proposed framework to a real case, as part of an ongoing collaboration with a large software company (company A), is presented herein. Further, we also provide a detailed example, partially using real data on decisions, of a value estimation Bayesian network (BN) model for company A. This paper presents some empirical results from applying the VALUE framework to a large ICT company; those relate to eliciting key stakeholders’ tacit knowledge, which is later used as input to a pilot study where these stakeholders employ the Value tool to select features for one of their company’s chief products. The data on decisions obtained from this pilot study are later applied to a detailed example on building a value estimation BN model for company A. We detail a framework, the VALUE framework, intended to help companies improve their value-based decisions and to go a step further and also estimate the overall value of each decision. © 2017 The Author(s)
  •  
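The weighted sum algorithm (WSA) mentioned above aggregates expert-elicited value factors into a single score. A minimal illustrative sketch follows; the factor names, weights and scores are hypothetical and would in practice be elicited from key stakeholders, not hard-coded.

# Minimal sketch of weighted-sum aggregation of expert-elicited value factors.
# Factor names, weights and scores below are hypothetical illustrations.

# Expert-elicited relative importance of each value factor (sums to 1).
WEIGHTS = {
    "customer_value": 0.40,
    "time_to_market": 0.25,
    "technical_risk": 0.20,   # scored so that higher = less risky
    "strategic_fit": 0.15,
}

def overall_value(scores: dict) -> float:
    """Weighted sum of normalised factor scores (each in [0, 1])."""
    return sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)

# Example: compare two candidate features on the same factors.
feature_a = {"customer_value": 0.9, "time_to_market": 0.4,
             "technical_risk": 0.7, "strategic_fit": 0.8}
feature_b = {"customer_value": 0.6, "time_to_market": 0.9,
             "technical_risk": 0.9, "strategic_fit": 0.5}

print(overall_value(feature_a), overall_value(feature_b))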
36.
  • Mendes, Emilia, et al. (författare)
  • Using Bayesian Network to Estimate the Value of Decisions within the Context of Value-Based Software Engineering : A Multiple Case Study
  • 2019
  • Ingår i: International journal of software engineering and knowledge engineering. - : World Scientific Publishing Co. Pte Ltd. - 0218-1940. ; 29:11-12, s. 1629-1671
  • Tidskriftsartikel (refereegranskat)abstract
    • Companies must make a paradigm shift in which both short- and long-term value aspects are employed to guide their decision-making. Such need is pressing in innovative industries, such as ICT, and is the core of Value-based Software Engineering (VBSE). Objective: This paper details three case studies where value estimation models using Bayesian Network (BN) were built and validated. These estimation models were based upon value-based decisions made by key stakeholders in the contexts of feature selection, test cases execution prioritization, and user interfaces design selection. Methods: All three case studies were carried out according to a Framework called VALUE - improVing decision-mAking reLating to software-intensive prodUcts and sErvices development. This framework includes a mixed-methods approach, comprising several steps to build and validate company-specific value estimation models. Such a building process uses as input data key stakeholders' decisions (gathered using the Value tool), plus additional input from key stakeholders. Results: Three value estimation BN models were built and validated, and the feedback received from the participating stakeholders was very positive. Conclusions: We detail the building and validation of three value estimation BN models, using a combination of data from past decision-making meetings and also input from key stakeholders. © 2019 World Scientific Publishing Company.
  •  
37.
  • Mendes, Emilia (författare)
  • Using expert-based Bayesian networks as decision support systems to improve project management of healthcare software projects
  • 2013
  • Konferensbidrag (refereegranskat)abstract
    • One of the pillars of sound Software Project Management is reliable effort estimation. Therefore, it is important to fully identify the fundamental factors that affect an effort estimate for a new project and how these factors are inter-related. This paper describes a case study where a Bayesian Network model to estimate effort for healthcare software projects was built. This model was elicited solely from expert knowledge, with the participation of seven project managers, and was validated using data from 22 past finished projects. The model led to numerous changes in process and also in business. The company adapted its existing effort estimation process to be in line with the model that was created, and the use of a mathematically-based model also led to an increase in the number of projects being delegated to this company by other company branches worldwide.
  •  
38.
  • Mendes, Emilia, et al. (författare)
  • When to update systematic literature reviews in software engineering
  • 2020
  • Ingår i: Journal of Systems and Software. - : Elsevier Inc.. - 0164-1212 .- 1873-1228. ; 167
  • Tidskriftsartikel (refereegranskat)abstract
    • [Context] Systematic Literature Reviews (SLRs) have been adopted by the Software Engineering (SE) community for approximately 15 years to provide meaningful summaries of evidence on several topics. Many of these SLRs are now potentially outdated, and there are no systematic proposals on when to update SLRs in SE. [Objective] The goal of this paper is to provide recommendations on when to update SLRs in SE. [Method] We evaluated, using a three-step approach, a third-party decision framework (3PDF) employed in other fields, to decide whether SLRs need updating. First, we conducted a literature review of SLR updates in SE and contacted the authors to obtain their feedback relating to the usefulness of the 3PDF within the context of SLR updates in SE. Second, we used these authors’ feedback to see whether the framework needed any adaptation; none was suggested. Third, we applied the 3PDF to the SLR updates identified in our literature review. [Results] The 3PDF showed that 14 of the 20 SLRs did not need updating. This supports the use of a decision support mechanism (such as the 3PDF) to help the SE community decide when to update SLRs. [Conclusions] We put forward that the 3PDF should be adopted by the SE community to keep relevant evidence up to date and to avoid wasting effort with unnecessary updates. © 2020
  •  
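The paper above evaluates a third-party decision framework (3PDF) for judging whether an SLR needs updating. The sketch below only illustrates how a decision checklist of that general kind can be encoded; the questions shown are placeholders for illustration, not the actual 3PDF criteria, which are given in the paper and its source framework.

# Generic sketch of encoding an "update or not" decision checklist for an SLR.
# The questions below are placeholders, not the actual 3PDF criteria.

from dataclasses import dataclass

@dataclass
class SlrStatus:
    question_still_relevant: bool    # is the SLR's research question still current?
    new_primary_studies: bool        # has relevant new evidence appeared?
    findings_likely_to_change: bool  # could the new evidence alter the conclusions?

def needs_update(status: SlrStatus) -> bool:
    """Flag an SLR for updating only if the question is still relevant, new
    evidence exists, and that evidence could change the findings."""
    return (status.question_still_relevant
            and status.new_primary_studies
            and status.findings_likely_to_change)

print(needs_update(SlrStatus(True, True, False)))   # -> False: no update needed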
39.
  • Middleton, Anna, et al. (författare)
  • Global Public Perceptions of Genomic Data Sharing : What Shapes the Willingness to Donate DNA and Health Data?
  • 2020
  • Ingår i: American Journal of Human Genetics. - : Elsevier BV. - 0002-9297 .- 1537-6605. ; 107:4, s. 743-752
  • Tidskriftsartikel (refereegranskat)abstract
    • Analyzing genomic data across populations is central to understanding the role of genetic factors in health and disease. Successful data sharing relies on public support, which requires attention to whether people around the world are willing to donate their data that are then subsequently shared with others for research. However, studies of such public perceptions are geographically limited and do not enable comparison. This paper presents results from a very large public survey on attitudes toward genomic data sharing. Data from 36,268 individuals across 22 countries (gathered in 15 languages) are presented. In general, publics across the world do not appear to be aware of, nor familiar with, the concepts of DNA, genetics, and genomics. Willingness to donate one's DNA and health data for research is relatively low, and trust in the process of data's being shared with multiple users (e.g., doctors, researchers, governments) is also low. Participants were most willing to donate DNA or health information for research when the recipient was specified as a medical doctor and least willing to donate when the recipient was a for-profit researcher. Those who were familiar with genetics and who were trusting of the users asking for data were more likely to be willing to donate. However, less than half of participants trusted more than one potential user of data, although this varied across countries. Genetic information was not uniformly seen as different from other forms of health information, but there was an association between seeing genetic information as special in some way compared to other health data and increased willingness to donate. The global perspective provided by our "Your DNA, Your Say" study is valuable for informing the development of international policy and practice for sharing genomic data. It highlights that the research community not only needs to be worthy of trust by the public, but also urgent steps need to be taken to authentically communicate why genomic research is necessary and how data donation, and subsequent sharing, is integral to this.
  •  
40.
  • Milne, Richard, et al. (författare)
  • Demonstrating trustworthiness when collecting and sharing genomic data : public views across 22 countries
  • 2021
  • Ingår i: Genome Medicine. - : Springer Science and Business Media LLC. - 1756-994X. ; 13:1
  • Tidskriftsartikel (refereegranskat)abstract
    • Background: Public trust is central to the collection of genomic and health data and the sustainability of genomic research. To merit trust, those involved in collecting and sharing data need to demonstrate they are trustworthy. However, it is unclear what measures are most likely to demonstrate this. Methods: We analyse the ‘Your DNA, Your Say’ online survey of public perspectives on genomic data sharing including responses from 36,268 individuals across 22 low-, middle- and high-income countries, gathered in 15 languages. We examine how participants perceived the relative value of measures to demonstrate the trustworthiness of those using donated DNA and/or medical information. We examine between-country variation and present a consolidated ranking of measures. Results: Providing transparent information about who will benefit from data access was the most important measure to increase trust, endorsed by more than 50% of participants across 20 of 22 countries. It was followed by the option to withdraw data and transparency about who is using data and why. Variation was found for the importance of measures, notably information about sanctions for misuse of data, endorsed by 5% in India but almost 60% in Japan. A clustering analysis suggests alignment between some countries in the assessment of specific measures, such as the UK and Canada, Spain and Mexico and Portugal and Brazil. China and Russia are less closely aligned with other countries in terms of the value of the measures presented. Conclusions: Our findings highlight the importance of transparency about data use and about the goals and potential benefits associated with data sharing, including to whom such benefits accrue. They show that members of the public value knowing what benefits accrue from the use of data. The study highlights the importance of locally sensitive measures to increase trust as genomic data sharing continues globally.
  •  
41.
  • Milne, Richard, et al. (författare)
  • Return of genomic results does not motivate intent to participate in research for all: Perspectives across 22 countries
  • 2022
  • Ingår i: Genetics in Medicine. - : Elsevier BV. - 1098-3600 .- 1530-0366. ; 24:5, s. 1120-1129
  • Tidskriftsartikel (refereegranskat)abstract
    • Purpose: The aim of this study was to determine how attitudes toward the return of genomic research results vary internationally. Methods: We analyzed the “Your DNA, Your Say” online survey of public perspectives on genomic data sharing including responses from 36,268 individuals across 22 low-, middle-, and high-income countries, and these were gathered in 15 languages. We analyzed how participants responded when asked whether return of results (RoR) would motivate their decision to donate DNA or health data. We examined variation across the study countries and compared the responses of participants from other countries with those from the United States, which has been the subject of the majority of research on return of genomic results to date. Results: There was substantial variation in the extent to which respondents reported being influenced by RoR. However, only respondents from Russia were more influenced than those from the United States, and respondents from 20 countries had lower odds of being partially or wholly influenced than those from the United States. Conclusion: There is substantial international variation in the extent to which the RoR may motivate people's intent to donate DNA or health data. The United States may not be a clear indicator of global attitudes. Participants’ preferences for return of genomic results globally should be considered.
  •  
42.
  • Minku, Leandro, et al. (författare)
  • How to Make Best Use of Cross-Company Data for Web Effort Estimation?
  • 2015
  • Ingår i: 2015 ACM/IEEE INTERNATIONAL SYMPOSIUM ON EMPIRICAL SOFTWARE ENGINEERING AND MEASUREMENT (ESEM). - 9781467378994 ; , s. 172-181
  • Konferensbidrag (refereegranskat)abstract
    • [Context]: The numerous challenges that can hinder software companies from gathering their own data have motivated over the past 15 years research on the use of cross-company (CC) datasets for software effort prediction. Part of this research focused on Web effort prediction, given the large increase worldwide in the development of Web applications. Some of these studies indicate that it may be possible to achieve better performance using CC models if some strategy to make the CC data more similar to the within-company (WC) data is adopted. [Goal]: This study investigates the use of a recently proposed approach called Dycom to assess to what extent Web effort predictions obtained using CC datasets are effective in relation to the predictions obtained using WC data when explicitly mapping the CC models to the WC context. [Method]: Data on 125 Web projects from eight different companies that are part of the Tukutuku database were used to build prediction models. We benchmarked these models against baseline models (mean and median effort) and a WC base learner that does not benefit from the mapping. We also compared Dycom against a competitive CC approach from the literature (NN-filtering). We report a company-by-company analysis. [Results]: Dycom usually managed to achieve similar or better performance than a WC model while using only half of the WC training data. These results are also an improvement over previous studies that investigated the use of different strategies to adapt CC models to the WC data for Web effort estimation. [Conclusions]: We conclude that the use of Dycom for Web effort prediction is quite promising and, in general, supports previous results obtained when applying Dycom to conventional software datasets.
  •  
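The benchmarking described above compares a candidate predictor against simple mean- and median-effort baselines. The sketch below shows only that baseline comparison, scored with mean absolute error; the project numbers are invented and Dycom itself is not reproduced here.

# Sketch of the baseline comparison commonly used when evaluating effort
# predictors: mean- and median-effort baselines versus a candidate model,
# scored with mean absolute error (MAE). All numbers are made up.

from statistics import mean, median

train_effort = [320, 150, 410, 95, 610, 240]   # past projects (person-hours)
test_effort  = [280, 500, 130]                 # held-out true efforts
model_preds  = [300, 455, 160]                 # hypothetical model output

def mae(predictions, actuals):
    return mean(abs(p - a) for p, a in zip(predictions, actuals))

mean_baseline   = [mean(train_effort)] * len(test_effort)
median_baseline = [median(train_effort)] * len(test_effort)

print("MAE mean baseline  :", round(mae(mean_baseline, test_effort), 1))
print("MAE median baseline:", round(mae(median_baseline, test_effort), 1))
print("MAE candidate model:", round(mae(model_preds, test_effort), 1))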
43.
  • Molléri, Jefferson Seide, et al. (författare)
  • Aligning the Views of Research Quality in Empirical Software Engineering
  • Annan publikation (övrigt vetenskapligt/konstnärligt)abstract
    • Context: Research quality is intended to assess the design and reporting of studies. It comprises a series of concepts such as methodological rigor, practical relevance, and conformance to ethical standards. Depending on the perspective, different views of importance are given to the conceptual dimensions of research quality. Objective: We intend to assess the level of alignment between researchers with regard to a conceptual model of research quality. This includes aligning the definition of research quality and reasoning on the relative importance of quality characteristics. Method: We conducted a mixed methods approach comprising an internal case study and a complementary focus group. We carried out a hierarchical voting prioritization based on the conceptual model to collect relative values for importance. In the focus group, we also moderated discussions with experts to address potential misalignment. Results: The alignment at the research group level was higher compared to that at the community level. Moreover, the interdisciplinary conceptual quality model was seen to fairly express the quality of research, but presented limitations regarding its structure and components' description, which resulted in an updated model. Conclusion: The interdisciplinary model used was suitable for the software engineering context. The process used for reflecting on the alignment of quality with respect to definitions and priorities worked well.
  •  
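The hierarchical voting prioritization used in the study above (and in the related journal article further down this list) is commonly operationalised by letting participants distribute points among top-level dimensions and then among each dimension's sub-characteristics, with a sub-characteristic's overall weight obtained by multiplying the normalised shares along its path. The sketch below illustrates only that arithmetic; the dimension names and point allocations are hypothetical, not the study's conceptual model or its participants' votes.

# Sketch of hierarchical voting prioritization: points are distributed among
# top-level quality dimensions, then among each dimension's sub-characteristics;
# an item's overall weight is the product of the normalised shares on its path.
# Names and points below are hypothetical.

top_level = {"rigor": 50, "relevance": 30, "ethics": 20}          # 100 points
sub_level = {
    "rigor": {"internal_validity": 60, "reliability": 40},
    "relevance": {"applicable_results": 70, "relevant_idea": 30},
    "ethics": {"informed_consent": 100},
}

def overall_weights(top, subs):
    top_total = sum(top.values())
    weights = {}
    for dimension, points in top.items():
        dim_share = points / top_total
        sub_total = sum(subs[dimension].values())
        for sub, sub_points in subs[dimension].items():
            weights[sub] = dim_share * (sub_points / sub_total)
    return weights

for name, w in sorted(overall_weights(top_level, sub_level).items(),
                      key=lambda kv: -kv[1]):
    print(f"{name:20s} {w:.2f}")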
44.
  • Molléri, Jefferson Seide, et al. (författare)
  • An Empirically Evaluated Checklist for Surveys in Software Engineering
  • 2020
  • Ingår i: Information and Software Technology. - : Elsevier. - 0950-5849 .- 1873-6025.
  • Tidskriftsartikel (refereegranskat)abstract
    • Context: Over the past decade Software Engineering research has seen a steady increase in survey-based studies, and there are several guidelines providing support for those willing to carry out surveys. The need for auditing survey research has been raised in the literature. Checklists have been used to assess different types of empirical studies, such as experiments and case studies. Objective: This paper proposes a checklist to support the design and assessment of survey-based research in software engineering, grounded in existing guidelines for survey research. We further evaluated the checklist in the research practice context. Method: To construct the checklist, we systematically aggregated knowledge from 12 methodological studies supporting survey-based research in software engineering. We identified the key stages of the survey process and its recommended practices through thematic analysis and vote counting. To improve our initially designed checklist, we evaluated it using a mixed evaluation approach involving experienced researchers. Results: The evaluation provided insights regarding the limitations of the checklist in relation to its understanding and objectivity. In particular, 19 of the 38 checklist items were improved according to the feedback received from its evaluation. Finally, a discussion on how to use the checklist and what its implications are for research practice is also provided. Conclusion: The proposed checklist is an instrument suitable for auditing survey reports as well as a support tool to guide ongoing research with regard to the survey design process.
  •  
45.
  • Molléri, Jefferson Seide, et al. (författare)
  • CERSE - Catalog for empirical research in software engineering : A Systematic mapping study
  • 2019
  • Ingår i: Information and Software Technology. - : Elsevier B.V.. - 0950-5849 .- 1873-6025. ; 105, s. 117-149
  • Tidskriftsartikel (refereegranskat)abstract
    • Context: Empirical research in software engineering contributes towards developing scientific knowledge in this field, which in turn is relevant to inform decision-making in industry. A number of empirical studies have been carried out to date in software engineering, and the need for guidelines for conducting and evaluating such research has been stressed. Objective: The main goal of this mapping study is to identify and summarize the body of knowledge on research guidelines, assessment instruments and knowledge organization systems on how to conduct and evaluate empirical research in software engineering. Method: A systematic mapping study employing manual search and snowballing techniques was carried out to identify the suitable papers. To build up the catalog, we extracted and categorized information provided by the identified papers. Results: The mapping study comprises a list of 341 methodological papers, classified according to research methods, research phases covered, and type of instrument provided. Later, we derived a brief explanatory review of the instruments provided for each of the research methods. Conclusion: We provide an aggregated body of knowledge on the state of the art relating to guidelines, assessment instruments and knowledge organization systems for carrying out empirical software engineering research, as well as an exemplary usage scenario that can be used to guide those carrying out such studies. Finally, we discuss the catalog's implications for research practice and the need for further research. © 2018 Elsevier B.V.
  •  
46.
  • Molléri, Jefferson Seide, et al. (författare)
  • Determining a core view of research quality in empirical software engineering
  • 2023
  • Ingår i: Computer Standards & Interfaces. - : Elsevier. - 0920-5489 .- 1872-7018. ; 84
  • Tidskriftsartikel (refereegranskat)abstract
    • Context: Research quality is intended to appraise the design and reporting of studies. It comprises a set of standards such as methodological rigor, practical relevance, and conformance to ethical standards. Depending on the perspective, different views of importance are given to the standards for research quality. Objective: To investigate the suitability of a conceptual model of research quality to Software Engineering (SE), from the perspective of researchers engaged in Empirical Software Engineering (ESE) research, in order to understand the core value of research quality. Method: We conducted a mixed-methods approach with two distinct group perspectives: (i) a research group; and (ii) the empirical SE research community. Our data collection approach comprised a questionnaire survey and a complementary focus group. We carried out a hierarchical voting prioritization to collect relative values for importance of standards for research quality. Results: In the context of this research, ‘internally valid’, ‘relevant research idea’, and ‘applicable results’ are perceived as the core standards for research quality in empirical SE. The alignment at the research group level was higher compared to that at the community level. Conclusion: The conceptual model was seen to express fairly the standards for research quality in the SE context. It presented limitations regarding its structure and components’ description, which resulted in an updated model. © 2022
  •  
47.
  • Molléri, Jefferson Seide, et al. (författare)
  • Reasoning about Research Quality Alignment in Software Engineering
  • Annan publikation (övrigt vetenskapligt/konstnärligt)abstract
    • Context: Research quality is intended to assess the design and reporting of studies. It comprises a series of concepts such as methodological rigor, practical relevance, and conformance to ethical standards. Depending on the perspective, different views of importance are given to the conceptual dimensions of research quality. Objective: We aim to better understand what constitutes research quality from the perspective of the empirical software engineering community. In particular, we intend to assess the level of alignment between researchers with regard to a conceptual model of research quality. Method: We conducted a mixed methods approach comprising an internal case study and a complementary focus group. We carried out a hierarchical voting prioritization based on the conceptual model to collect relative values for importance. In the focus group, we also moderated discussions with experts to address potential misalignment. Results: We provide levels of alignment with regard to the importance of quality dimensions in the view of the participants. Moreover, the conceptual model fairly expresses the quality of research but has limitations with regard to the structure and description of its components. Conclusion: Based on the results, we revised the conceptual model and provided an updated version adjusted to the context of empirical software engineering research. We also discussed how to assess quality alignment in research using our approach, and how to use the revised model of quality to characterize an assessment instrument.
  •  
48.
  • Molléri, Jefferson Seide, et al. (författare)
  • Towards understanding the relation between citations and research quality in software engineering studies
  • 2018
  • Ingår i: Scientometrics. - : Springer Netherlands. - 0138-9130 .- 1588-2861. ; 117:3, s. 1453-1487
  • Tidskriftsartikel (refereegranskat)abstract
    • The importance of achieving high quality in research practice has been highlighted in different disciplines. At the same time, citations are utilized to measure the impact of academic researchers and institutions. One open question is whether the quality in the reporting of research is related to scientific impact, which would be desired. In this exploratory study we aim to: (1) Investigate how consistently a scoring rubric for rigor and relevance has been used to assess research quality of software engineering studies; (2) Explore the relationship between rigor, relevance and citation count. Through backward snowball sampling we identified 718 primary studies assessed through the scoring rubric. We utilized cluster analysis and a conditional inference tree to explore the relationship between quality in the reporting of research (represented by rigor and relevance) and scientometrics (represented by normalized citations). The results show that only rigor is related to studies’ normalized citations. Besides that, confounding factors are likely to influence the number of citations. The results also suggest that the scoring rubric is not applied the same way by all studies, and one of the likely reasons is that it was found to be too abstract and in need of further refinement. Our findings could be used as a basis to further understand the relation between the quality in the reporting of research and scientific impact, and foster new discussions on how to fairly acknowledge studies for performing well with respect to the emphasized research quality. Furthermore, we highlighted the need to further improve the scoring rubric. © 2018, The Author(s).
  •  
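The study above relates rigor and relevance scores to normalized citation counts. One simple and common normalization is citations per year since publication; the sketch below uses that convention as an assumption, with invented records, and the paper itself should be consulted for the exact normalization it applies.

# Sketch of one simple citation normalisation: citations per year since
# publication. The records below are invented for illustration; the paper's
# exact normalisation procedure may differ.

from datetime import date

papers = [
    {"id": "S1", "year": 2010, "citations": 120, "rigor": 2.5, "relevance": 3.0},
    {"id": "S2", "year": 2016, "citations": 45,  "rigor": 1.0, "relevance": 2.0},
    {"id": "S3", "year": 2014, "citations": 10,  "rigor": 0.5, "relevance": 1.0},
]

def normalised_citations(paper, current_year=date.today().year):
    age = max(current_year - paper["year"], 1)   # avoid division by zero
    return paper["citations"] / age

for p in papers:
    print(p["id"], round(normalised_citations(p), 1), "citations/year")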
49.
  • Molléri, Jefferson Seide (författare)
  • Views of Research Quality in Empirical Software Engineering
  • 2019
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • Background. Software Engineering (SE) research, like other applied disciplines, intends to provide trustworthy evidence to practice. To ensure trustworthy evidence, a rigorous research process based on sound research methodologies is required. Further, to be practically relevant, researchers rely on identifying original research problems that are of interest to industry; and the research must fulfill various quality standards that form the basis for the evaluation of the empirical research in SE. A dialogue and shared view of quality standards for research practice is still to be achieved within the research community. Objectives. The main objective of this thesis is to foster dialogue and capture different views of SE researchers on method level (e.g., through the identification and reasoning on the importance of quality characteristics for experiments, surveys and case studies) as well as general quality standards for Empirical Software Engineering (ESE). Given the views of research quality, a second objective is to understand how to operationalize, i.e., build and validate instruments to assess research quality. Method. The thesis makes use of a mixed method approach of both qualitative and quantitative nature. The research methods used were case studies, surveys, and focus groups. A range of data collection methods has been employed, such as literature review, questionnaires, and semi-structured workshops. To analyze the data, we utilized content and thematic analysis, as well as descriptive and inferential statistics. Results. We draw two distinct views of research quality. Through a top-down approach, we assessed and evolved a conceptual model of research quality within the ESE research community. Through a bottom-up approach, we built a checklist instrument for assessing survey-based research grounded in supporting literature and evaluated ours and others’ checklists in research practice and research education contexts. Conclusion. The quality standards we identified and operationalized support and extend the current understanding of research quality for SE research. This is a preliminary, but still vital, step towards a shared understanding and view of research quality for ESE research. Further steps are needed to gain a shared understanding of research quality within the community.
  •  
50.
  • Molleri, Jefferson, et al. (författare)
  • Survey Guidelines in Software Engineering : An Annotated Review
  • 2016
  • Ingår i: ESEM'16. - New York, NY, USA : ASSOC COMPUTING MACHINERY.
  • Konferensbidrag (refereegranskat)abstract
    • Background: A survey is a research method aiming to gather data from a large population of interest. Despite being extensively used in software engineering, survey-based research faces several challenges, such as selecting a representative population sample and designing the data collection instruments. Objective: This article aims to summarize the existing guidelines, supporting instruments and recommendations on how to conduct and evaluate survey-based research. Methods: A systematic search using manual search and snowballing techniques was used to identify primary studies supporting survey research in software engineering. We used an annotated review to present the findings, describing the references of interest in the research topic. Results: The summary provides a description of 15 available articles addressing the survey methodology, based upon which we derived a set of recommendations on how to conduct survey research, and their impact in the community. Conclusion: Survey-based research in software engineering has its particular challenges, as illustrated by several articles in this review. The annotated review can contribute by raising awareness of such challenges and presenting the proper recommendations to overcome them.
  •  