SwePub


Result list for search "L773:1530 1362"

Search: L773:1530 1362

  • Results 1-27 of 27
1.
  • Ahmad, Azeem, et al. (author)
  • A Multi-factor Approach for Flaky Test Detection and Automated Root Cause Analysis
  • 2021
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - IEEE Computer Society. - 1530-1362. ; pp. 338-348
  • Conference paper (peer-reviewed), abstract:
    • Developers often spend time to determine whether test case failures are real failures or flaky. The flaky tests, also known as non-deterministic tests, switch their outcomes without any modification in the codebase, hence reducing the confidence of developers during maintenance as well as in the quality of a product. Re-running test cases to reveal flakiness is resource-consuming, unreliable and does not reveal the root causes of test flakiness. Our paper evaluates a multi-factor approach to identify flaky test executions implemented in a tool named MDFlaker. The four factors are: trace-back coverage, flaky frequency, number of test smells, and test size. Based on the extracted factors, MDFlaker uses k-Nearest Neighbor (KNN) to determine whether failed test executions are flaky. We investigate MDFlaker in a case study with 2166 test executions from different open-source repositories. We evaluate the effectiveness of our flaky detection tool. We illustrate how the multi-factor approach can be used to reveal root causes for flakiness, and we conduct a qualitative comparison between MDFlaker and other tools proposed in literature. Our results show that the combination of different factors can be used to identify flaky tests. Each factor has its own trade-off, e.g., trace-back leads to many true positives, while flaky frequency yields more true negatives. Therefore, specific combinations of factors enable classification for testers with limited information (e.g., not enough test history information).
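As a rough illustration of the classification step summarised in entry 1 above (not the authors' MDFlaker implementation), the sketch below trains a k-Nearest Neighbor model on the four factors named in the abstract. The feature values, labels and the use of scikit-learn are assumptions made for the example.

```python
# Minimal sketch: KNN over the four factors named in the abstract above.
# All feature values and labels are invented; scikit-learn is assumed.
from sklearn.neighbors import KNeighborsClassifier

# One row per failed test execution:
# [trace-back coverage, flaky frequency, number of test smells, test size (LOC)]
X_train = [
    [0.10, 0.40, 3, 120],   # previously observed flaky failure
    [0.05, 0.55, 4,  80],   # previously observed flaky failure
    [0.90, 0.00, 0, 200],   # genuine failure (failure covered by trace-back)
    [0.85, 0.05, 1, 150],   # genuine failure
]
y_train = ["flaky", "flaky", "real", "real"]

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

# A new failed execution with weak trace-back coverage and some flaky history.
print(clf.predict([[0.15, 0.30, 2, 100]]))   # expected: ['flaky']
```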
2.
  • Alahyari, Hiva, 1979, et al. (author)
  • What Do Agile Teams Find Important for Their Success?
  • 2018
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - 1530-1362. ; 2018-December, pp. 474-483
  • Conference paper (peer-reviewed), abstract:
    • Although the general benefits of agile methods have been shown, it is not always clear what makes the application of agile successful or not in a company. With this motivation, we investigate agile success factors, particularly from the viewpoint of teams. We conduct in-company surveys to collect and rank agile team success factors, comparing these results with success factors found in the literature. Our results introduce new success factors not previously discussed in related work. The findings emphasize the importance of team environment, team spirit, and team capability as opposed to previous work which emphasizes project management process and customer involvement. These findings can help find issues and improve the performance of agile teams.
3.
  • Berbyuk Lindström, Nataliya, 1978, et al. (author)
  • Understanding Metrics Team-Stakeholder Communication in Agile Metrics Service Delivery
  • 2021
  • In: APSEC (Asia-Pacific Software Engineering Conference), December 6-10, Taiwan (virtual). ; 2021-December, pp. 401-409
  • Conference paper (peer-reviewed), abstract:
    • In this paper, we explore challenges in communication between metrics teams and stakeholders in metrics service delivery. Drawing on interviews and interactive workshops with team members and stakeholders at two different Swedish agile software development organizations, we identify interrelated challenges such as aligning expectations, prioritizing demands, providing regular feedback, and maintaining continuous dialogue, which influence team-stakeholder interaction, relationships and performance. Our study shows the importance of understanding communicative hurdles and provides suggestions for their mitigation, therefore meriting further empirical research.
4.
  • Burden, Håkan, 1976, et al. (author)
  • Executable and Translatable UML - How Difficult Can It Be?
  • 2011
  • In: 18th Asia-Pacific Software Engineering Conference, APSEC 2011; Ho Chi Minh; Viet Nam; 5 December 2011 through 8 December 2011. - 1530-1362. - 9781457721991 ; pp. 114-121
  • Conference paper (peer-reviewed), abstract:
    • Executable and Translatable UML enables Model-Driven Architecture by specifying Platform-Independent Models that can be automatically transformed into Platform-Specific Models through model compilation. Previous research shows that the transformations result in both efficient code and consistency between the models. However, there are neither results for the effort of introducing the technology in a new context nor on the level of expertise needed for designing the Platform-Independent Models. We wanted to know if teams of novice software developers could design Executable and Translatable UML models without prior experience of software modelling. As part of a new university course we conducted an exploratory case study with two data collections over two years. Bachelor students were given the task to design a hotel reservation system and the necessary test cases for verifying the functionality and structure of the models within 300 hours, using Executable and Translatable UML. In total, 43 out of 50 teams succeeded in delivering verified and consistent models within the time frame. During the second data collection the students were given limited tool training. This raised the quality of the models. Due to the executable feature of the models the students were given constant feedback on their design until the models behaved as expected, with the required level of detail and structure. Our results show that using Executable and Translatable UML does not require more expertise than a bachelor program in computer science. All in all, Executable and Translatable UML could play an important role in future software development.
5.
  • Dakkak, Anas, et al. (author)
  • Towards Continuous Data Collection from In-service Products: Exploring the Relation between Data Dimensions and Collection Challenges
  • 2021
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - 1530-1362. ; 2021-December, pp. 243-252
  • Conference paper (peer-reviewed), abstract:
    • Data collected from in-service products play an important role in enabling software-intensive embedded systems suppliers to embrace data-driven practices. Data can be used in many different ways such as to continuously learn and improve the product, enhance post-deployment services, reduce operational cost or create a better user experience. While there is no shortage of possible use cases leveraging data from in-service products, software-intensive embedded systems companies struggle to continuously collect data from their in-service products. Often, data collection is done in an ad-hoc way and targeting specific use cases or needs. Besides, few studies have investigated data collection challenges in relation to the data dimensions, which are the minimum set of quantifiable data aspects that can define software-intensive embedded product data from a collection point of view. To help address data collection challenges, and to provide companies with guidance on how to improve this process, we conducted a case study at a large multinational telecommunications supplier focusing on data characteristics and collection challenges from the Radio Access Networks (RAN) products. We further investigated the relations of these challenges to the data dimensions to increase our understanding of how data dimensions contribute to the challenges.
6.
  • Figalist, Iris, et al. (author)
  • Mining customer satisfaction on b2b online platforms using service quality and web usage metrics
  • 2020
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - IEEE. - 1530-1362. ; 2020-December, pp. 435-444
  • Conference paper (peer-reviewed), abstract:
    • In order to distinguish themselves from their competitors, software service providers constantly try to assess and improve customer satisfaction. However, measuring customer satisfaction in a continuous way is often time and cost intensive, or requires effort on the customer side. Especially in B2B contexts, a continuous assessment of customer satisfaction is difficult to achieve due to potential restrictions and complex provider-customer-end user setups. While concepts such as web usage mining enable software providers to get a deep understanding of how their products are used, its application to quantitatively measure customer satisfaction has not yet been studied in greater detail. For that reason, our study aims at combining existing knowledge on customer satisfaction, web usage mining, and B2B service characteristics to derive a model that enables an automated calculation of quantitative customer satisfaction scores. We apply web usage mining to validate these scores and to compare the usage behavior of satisfied and dissatisfied customers. This approach is based on domain-specific service quality and web usage metrics and is, therefore, suitable for continuous measurements without requiring active customer participation. The applicability of the model is validated by instantiating it in a real-world B2B online platform.
7.
  • Gencel, Cigdem, et al. (author)
  • On the Relationship between Different Size Measures in the Software Life Cycle
  • 2009
  • In: 16th Asia-Pacific Software Engineering Conference, APSEC 2009; Penang; Malaysia; 1 December 2009 through 3 December 2009. - Penang, Malaysia : IEEE Computer Society. - 1530-1362. - 9780769539096 ; pp. 19-26
  • Conference paper (peer-reviewed), abstract:
    • Various measures and methods have been developed to measure the sizes of different software entities produced throughout the software life cycle. Understanding the nature of the relationship between the sizes of these products has become significant due to various reasons. One major reason is the ability to predict the size of the later phase products by using the sizes of early life cycle products. For example, we need to predict the Source Lines of Code (SLOC) from Function Points (FP) since SLOC is being used as the main input for most of the estimation models when this measure is not available yet. SLOC/FP ratios have been used by the industry for such purposes even though the assumed linear relationship has not been validated yet. Similarly, FP has recently started to be used to predict the Bytes of code for estimating the amount of spare memory needed in systems. In this paper, we aim to investigate further the nature of the relationship between the software functional size and the code size by conducting a series of empirical studies.
8.
  • Gomes, Francisco, 1987, et al. (author)
  • Visualizing test diversity to support test optimisation
  • 2018
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - 1530-1362. - 9781728119700 ; 2018-December
  • Conference paper (peer-reviewed), abstract:
    • Diversity has been used as an effective criterion to optimise test suites for cost-effective testing. Particularly, diversity-based (alternatively referred to as similarity-based) techniques have the benefit of being generic and applicable across different Systems Under Test (SUT), and have been used to automatically select or prioritise large sets of test cases. However, it is a challenge to feed back diversity information to developers and testers since results are typically many-dimensional. Furthermore, the generality of diversity-based approaches makes it harder to choose when and where to apply them. In this paper we address these challenges by investigating: i) what the trade-offs are in using different sources of diversity (e.g., diversity of test requirements or test scripts) to optimise large test suites, and ii) how visualisation of test diversity data can assist testers for test optimisation and improvement. We perform a case study on three industrial projects and present quantitative results on the fault detection capabilities and redundancy levels of different sets of test cases. Our key result is that test similarity maps, based on pair-wise diversity calculations, helped industrial practitioners identify issues with their test repositories and decide on actions to improve. We conclude that the visualisation of diversity information can assist testers in their maintenance and optimisation activities.
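The abstract in entry 8 above does not spell out its diversity metric, so the following is only a generic sketch of pair-wise, similarity-based test comparison: Jaccard distance over the requirements each test covers, which is one common way such diversity is computed. Test names and coverage sets are invented.

```python
# Generic sketch of pair-wise test diversity (Jaccard distance over covered
# requirements). Not the tooling from the paper; all data is invented.
from itertools import combinations

coverage = {
    "test_login":    {"REQ-1", "REQ-2"},
    "test_logout":   {"REQ-2", "REQ-3"},
    "test_checkout": {"REQ-4", "REQ-5", "REQ-6"},
}

def jaccard_distance(a: set, b: set) -> float:
    """1 - |intersection| / |union|; 1.0 means the two tests share no requirements."""
    return 1.0 - len(a & b) / len(a | b)

# The pair-wise matrix behind a 'test similarity map': low values flag
# redundant pairs (pruning candidates), high values flag diverse pairs.
for t1, t2 in combinations(coverage, 2):
    print(t1, t2, round(jaccard_distance(coverage[t1], coverage[t2]), 2))
```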
9.
  • Hebig, Regina, et al. (author)
  • Surveying the Corpus of Model Resolution Strategies for Metamodel Evolution
  • 2015
  • In: 22nd Asia-Pacific Software Engineering Conference, APSEC 2015; Holiday Inn Hotel at International Airport, New Delhi; India; 1 December 2015 through 4 December 2015. - 1530-1362. - 9781467396448 ; pp. 135-142
  • Conference paper (peer-reviewed), abstract:
    • Modeling languages evolve regularly. Companies need to maintain all those models that are used in running projects, which can cause these projects to fall back in their schedules. For the past 10 years, research has addressed this issue with approaches for automating co-evolution. The dominant core of these approaches is model resolution strategies. They define 1) how models have to be changed in reaction to specific metamodel changes, 2) what degree of automation can be reached, and 3) to what extent the user can control the resolution outcome. In this paper, we survey existing co-evolution approaches and analyze model resolution strategies. We present a corpus of more than 200 resolution strategies for 116 types of metamodel changes and discuss the degree of automation and the choices that users have today.
10.
  • Heldal, Rogardt, 1964, et al. (author)
  • Modeling executable test actors: Exploratory study done in executable and translatable UML
  • 2012
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - 1530-1362. - 9780769549224 ; 1, pp. 784-789
  • Conference paper (peer-reviewed), abstract:
    • Model-based testing presents new challenges in how to perform software testing due to the fact that models offer testing on several abstraction levels. This is largely an unexplored area. We propose a pattern to construct test actors which can be used to test Platform-Independent Models. In addition, these test actors can also be automatically transformed to Platform-Specific Model level to test the implementation deployed on target. Our work is one step in the direction of permitting early testing without any waste, since the test models can be reused at a lower level of abstraction.
11.
  • Ho-Quang, Truong, et al. (author)
  • Automatic classification of UML Class diagrams from images
  • 2014
  • In: Proceedings of the 21st Asia-Pacific Software Engineering Conference, APSEC 2014. - 1530-1362. - 9781479974252
  • Conference paper (peer-reviewed), abstract:
    • Graphical modelling of various aspects of software and systems is a common part of software development. UML is the de-facto standard for various types of software models. To be able to research UML, academia needs to have a corpus of UML models. For building such a database, an automated system that has the ability to classify UML class diagram images would be very beneficial, since a large portion of UML class diagrams (UML CDs) is available as images on the Internet. In this study, we propose 23 image-features and investigate the use of these features for the purpose of classifying UML CD images. We analyse the performance of the features and assess their contribution based on their Information Gain Attribute Evaluation scores. We study specificity and sensitivity scores of six classification algorithms on a set of 1300 images. We found that 19 out of 23 introduced features can be considered as influential predictors for classifying UML CD images. Across the six algorithms, the prediction rate achieves nearly 96% correctness for UML CD and 91% for non-UML CD.
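The 23 image features of entry 11 are not listed in the abstract, so the sketch below only mirrors the general workflow it describes: score candidate features with an information-gain-style measure and train a classifier on them. The two features, the synthetic data, and the use of scikit-learn's mutual information estimator in place of Information Gain Attribute Evaluation are all assumptions.

```python
# Sketch of the workflow described above: rank features, then classify.
# Feature names and data are synthetic placeholders.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
straightness = rng.uniform(0, 1, n)   # toy feature: average line straightness
text_ratio = rng.uniform(0, 1, n)     # toy feature: fraction of text area
X = np.column_stack([straightness, text_ratio])
# Synthetic label: "is a UML class diagram" correlates with straight lines.
y = (straightness + 0.1 * rng.normal(size=n) > 0.5).astype(int)

scores = mutual_info_classif(X, y, random_state=0)
print(dict(zip(["straightness", "text_ratio"], scores.round(3))))

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```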
12.
  • John, Meenu Mary, et al. (author)
  • AI deployment architecture: Multi-case study for key factor identification
  • 2020
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - IEEE. - 1530-1362. ; 2020-December, pp. 395-404
  • Conference paper (peer-reviewed), abstract:
    • Machine learning and deep learning techniques are becoming increasingly popular and critical for companies as part of their systems. However, although the development and prototyping of ML/DL systems are common across companies, the transition from prototype to production-quality deployment models is challenging. One of the key challenges is how to determine the selection of an optimal architecture for AI deployment. Based on our previous research, and to offer support and guidance to practitioners, we developed a framework in which we present five architectural alternatives for AI deployment ranging from centralized to fully decentralized edge architectures. As part of our research, we validated the framework in software-intensive embedded system companies and identified key challenges they face when deploying ML/DL models. In this paper, and to further advance our research on this topic, we identify factors that help practitioners determine what architecture to select for the ML/DL model deployment. For this, we conducted a follow-up study involving interviews and workshops in seven case companies in the embedded systems domain. Based on our findings, we identify three key factors and develop a framework in which we outline how prioritization and trade-offs between these result in a certain architecture. The contribution of the paper is threefold. First, we identify key factors critical for AI system deployment. Second, we present the architecture selection framework that explains how prioritization and trade-offs between key factors result in the selection of a certain architecture. Third, we discuss additional factors that may or may not influence the selection of an optimal architecture.
13.
  • John, Meenu Mary, et al. (author)
  • Exploring Trade-Offs in MLOps Adoption
  • 2023
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - 1530-1362. ; 30th Asia-Pacific Software Engineering Conference, APSEC 2023, pp. 369-375
  • Conference paper (peer-reviewed), abstract:
    • Machine Learning Operations (MLOps) play a crucial role in the success of data science projects in companies. However, despite its obvious benefits, several companies struggle to adopt MLOps practices and face difficulty in deciding how to deploy and evolve ML models. To gain a deeper understanding of these challenges, we conduct a multi-case study involving nine practitioners from seven companies. Based on our empirical results, we identify the key trade-offs we see companies make when adopting MLOps. We categorise these trade-offs into four concerns of the BAPO model: Business, Architecture, Process, and Organisation. Finally, we provide suggestions to mitigate the identified trade-offs. By identifying and detailing these trade-offs and the implications of these, this research helps companies to ensure the successful adoption of MLOps.
14.
  • Lindqvist, Erik, et al. (author)
  • Outliers and Replication in Software Engineering
  • 2014
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - 1530-1362. - 9781479974252 ; 1, pp. 207-214
  • Conference paper (peer-reviewed), abstract:
    • Empirical software engineering is a research field of growing interest. Studies within this field handle an increasing amount of data. In order to replicate a study the data needs to be accessible and all processing of this data needs to be reproducible. Specifically, the handling of deviating data points, also known as outliers, needs to be documented in order for a study to be replicated. This study investigated the data availability for recently published studies within empirical software engineering. Furthermore, it also investigated if outliers are documented in the same research field. Papers were reviewed using a literature review and the presence of outliers was investigated using an unsupervised outlier detection method. Only 37% of the papers reviewed had their data accessible. Furthermore, in many cases outliers were present in the reviewed studies but 63% of the studied papers did not mention how outliers were handled. The data availability within empirical software engineering research is low and is hindering replication of studies. Additionally, the lack of documentation regarding how outliers are handled is hindering replication.
15.
  • Liu, Yuchu, 1992, et al. (author)
  • Bayesian propensity score matching in automotive embedded software engineering
  • 2021
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - 1530-1362. ; 2021-December, pp. 233-242
  • Conference paper (peer-reviewed), abstract:
    • Randomised field experiments, such as A/B testing, have long been the gold standard for evaluating the value that new software brings to customers. However, running randomised field experiments is not always desired, possible or even ethical in the development of automotive embedded software. In the face of such restrictions, we propose the use of the Bayesian propensity score matching technique for causal inference of observational studies in the automotive domain. In this paper, we present a method based on the Bayesian propensity score matching framework, applied in the unique setting of automotive software engineering. This method is used to generate balanced control and treatment groups from an observational online evaluation and estimate causal treatment effects from the software changes, even with limited samples in the treatment group. We exemplify the method with a proof-of-concept in the automotive domain. In the example, we have a larger control fleet of cars (Nc = 1100) using the current software and a small treatment fleet (Nt = 38), in which we introduce a new software variant. We demonstrate a scenario in which shipping new software to all users is restricted and, as a result, a fully randomised experiment could not be conducted. Therefore, we utilised the Bayesian propensity score matching method with 14 observed covariates as inputs. The results show more balanced groups, suitable for estimating causal treatment effects from the collected observational data. We describe the method in detail and share our configuration. Furthermore, we discuss how such a method can be used for online evaluation of new software utilising small groups of samples.
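Entry 15 uses a Bayesian propensity score matching framework that is not reproduced here; the sketch below only illustrates the underlying matching idea in its classical, non-Bayesian form, on synthetic data. The covariates, outcome model and group sizes (borrowed from the abstract) are illustrative assumptions.

```python
# Simplified, non-Bayesian propensity score matching on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_control, n_treat = 1100, 38                     # group sizes from the abstract
X = rng.normal(size=(n_control + n_treat, 3))     # placeholder covariates
t = np.r_[np.zeros(n_control), np.ones(n_treat)]  # 0 = control, 1 = treatment
y = X[:, 0] + 0.5 * t + rng.normal(scale=0.1, size=len(t))  # simulated effect 0.5

# 1) Estimate propensity scores P(treated | covariates).
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

# 2) Match each treated unit to the control unit with the closest score.
treated = np.where(t == 1)[0]
controls = np.where(t == 0)[0]
matches = [controls[np.argmin(np.abs(ps[controls] - ps[i]))] for i in treated]

# 3) Average treatment effect on the treated, from the matched pairs.
att = np.mean(y[treated] - y[np.array(matches)])
print("estimated treatment effect:", round(att, 3))
```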
16.
  • Marculescu, Bogdan, et al. (author)
  • Finding a Boundary between Valid and Invalid Regions of the Input Space
  • 2018
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - IEEE Computer Society. - 9781728119700 ; 2018-December, pp. 169-178
  • Conference paper (peer-reviewed), abstract:
    • In the context of robustness testing, the boundary between the valid and invalid regions of the input space can be an interesting source of erroneous inputs. Knowing where a specific software under test (SUT) has a boundary is also essential for validation in relation to requirements. However, finding where a SUT actually implements the boundary is a non-trivial problem that has not gotten much attention. This paper proposes a method of finding the boundary between the valid and invalid regions of the input space, by developing pairs of test sets that describe that boundary in detail. The proposed method consists of two steps. First, test data generators, directed by a search algorithm to maximise distance to known, valid test cases, generate valid test cases that are closer to the boundary. Second, these valid test cases undergo mutations to try to push them over the boundary and into the invalid part of the input space. This results in a pair of test sets, one consisting of test cases on the valid side of the boundary and a matched set on the outer side, with only a small distance between the two sets. The method is evaluated on a number of examples from the standard library of a modern programming language. We propose a method of determining the boundary between valid and invalid regions of the input space, and apply it on a SUT that has a non-contiguous valid region of the input space. From the small distance between the developed pairs of test sets, and the fact that one test set contains valid test cases and the other invalid test cases, we conclude that the pair of test sets described the boundary between the valid and invalid regions of that input space. Differences of behaviour can be observed between different distances and different sets of mutation operators, but all show that the method is able to identify the boundary between the valid and invalid regions of the input space. This is an important step towards more automated robustness testing. © 2018 IEEE.
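Entry 16 evaluates its boundary-finding method on standard-library examples that are not repeated here; the toy sketch below only illustrates the two-step idea from the abstract (start from valid inputs, then mutate them until they turn invalid, keeping each valid/invalid pair). The stand-in SUT, math.sqrt, and the mutation scheme are assumptions for the example.

```python
# Toy sketch: shrink mutations until a valid/invalid pair brackets the boundary.
# The SUT (math.sqrt, valid for x >= 0) is a stand-in, not the paper's subject.
import math
import random

def is_valid(x: float) -> bool:
    """Validity oracle: sqrt rejects negative inputs with ValueError."""
    try:
        math.sqrt(x)
        return True
    except ValueError:
        return False

random.seed(0)
boundary_pairs = []
for _ in range(5):
    candidate = random.uniform(0.0, 10.0)   # step 1: a valid test input
    step = 1.0
    while True:                             # step 2: mutate toward invalidity
        mutated = candidate - step
        if is_valid(mutated):
            candidate = mutated             # still valid: keep pushing
        elif step > 1e-6:
            step /= 2                       # overshot: shrink the mutation
        else:
            boundary_pairs.append((candidate, mutated))
            break

for valid_side, invalid_side in boundary_pairs:
    print(f"valid {valid_side:.6f} | invalid {invalid_side:.6f}")
```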
17.
  • Marculescu, Bogdan, et al. (author)
  • Practitioner-Oriented Visualization in an Interactive Search-Based Software Test Creation Tool
  • 2013
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - IEEE Computer Society. - 1530-1362. - 9781479921430 ; 2, pp. 87-92
  • Conference paper (peer-reviewed), abstract:
    • Search-based software testing uses meta-heuristic search techniques to automate or partially automate testing tasks, such as test case generation or test data generation. It uses a fitness function to encode the quality characteristics that are relevant, for a given problem, and guides the search to acceptable solutions in a potentially vast search space. From an industrial perspective, this opens up the possibility of generating and evaluating lots of test cases without raising costs to unacceptable levels. First, however, the applicability of search-based software engineering in an industrial setting must be evaluated. In practice, it is difficult to develop a priori a fitness function that covers all practical aspects of a problem. Interaction with human experts offers access to experience that is otherwise unavailable and allows the creation of a more informed and accurate fitness function. Moreover, our industrial partner has already expressed a view that the knowledge and experience of domain specialists are more important to the overall quality of the systems they develop than software engineering expertise. In this paper we describe our application of Interactive Search Based Software Testing (ISBST) in an industrial setting. We used SBST to search for test cases for an industrial software module and based, in part, on interaction with a human domain specialist. Our evaluation showed that such an approach is feasible, though it also identified potential difficulties relating to the interaction between the domain specialist and the system.
18.
  • Martini, Antonio, 1982, et al. (author)
  • Process Debt: a First Exploration
  • 2020
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - 1530-1362. ; 2020-December, pp. 316-325
  • Conference paper (peer-reviewed), abstract:
    • Process Debt, like Technical Debt, can be the source of short-term benefits but often is harmful in the long term for a software organization. However, we do not know much about Process Debt from current literature. We conducted an exploratory study of Process Debt in four international organizations by interviewing 16 practitioners. The findings show that Process Debt can be a harmful phenomenon that needs attention and new practices, as it is different from Technical Debt. We provide an initial framework, composed of a definition and a conceptual model for Process Debt, showing types, causes, consequences, and debt accumulation patterns.
19.
  • Martini, Antonio, 1982, et al. (author)
  • The introduction of technical debt tracking in large companies. A Survey and Multiple Case-Study
  • 2016
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - 1530-1362. ; 0, pp. 161-168
  • Conference paper (peer-reviewed), abstract:
    • Large software companies need to support continuous and fast delivery of customer value both in the short and long term. However, this can be hindered if both evolution and maintenance of existing systems are hampered by Technical Debt. Although a lot of theoretical work on Technical Debt has been recently produced, its practical management lacks empirical studies. In this paper we investigate the state of practice in several companies in order to understand how they start tracking Technical Debt. We combined different methodologies: we conducted a survey, involving 226 respondents from 15 organizations, and a more in-depth multiple case-study in three organizations where Technical Debt was tracked, comprising 13 interviews and the analysis of 79 Technical Debt issues. We found that the development time dedicated to managing Technical Debt is substantial (around 25% of the overall development) but not systematic: only a few participants methodically track Technical Debt. By studying the approaches in the companies participating in the case-study, we understood how companies start tracking Technical Debt and what the initial benefits and challenges are. Finally, we propose a Strategic Adoption Model to define and adopt a dedicated process for tracking Technical Debt.
20.
  • Munappy, Aiswarya Raj, 1990, et al. (author)
  • Maturity Assessment Model for Industrial Data Pipelines
  • 2023
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - 1530-1362. ; 30th Asia-Pacific Software Engineering Conference, APSEC 2023, pp. 503-513
  • Conference paper (peer-reviewed), abstract:
    • Data pipelines can be defined as a complex chain of interconnected activities that starts with a data source and ends in a data sink. They can process data in multiple formats from various data sources with minimal human intervention, speed up data life cycle operations, and enhance productivity in data-driven organizations. As a result, companies place a high value on strengthening the maturity of their data pipelines. The available literature, on the other hand, is significantly insufficient in terms of providing a comprehensive roadmap to guide companies in assessing the maturity of their data pipelines. Therefore, this case study focuses on developing a data pipeline maturity assessment model that can evaluate the maturity of data pipelines in a staged manner from maturity level 1 to maturity level 5. We conducted empirical research in order to develop the maturity assessment model on the basis of five different determinants to address the specific needs of each data pipeline maturity level. Accordingly, it aims to support organizations in assessing their current data pipeline maturity, determining challenges at each stage, and preparing an extensive roadmap and suggestions for data pipeline maturity improvement. In future work, we plan to employ the maturity model in different companies as a case study to evaluate its applicability and usefulness.
21.
  • Munappy, Aiswarya Raj, 1990, et al. (author)
  • On the Impact of ML use cases on Industrial Data Pipelines
  • 2021
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - IEEE. - 1530-1362. ; 2021-December, pp. 463-472
  • Conference paper (peer-reviewed), abstract:
    • The impact of the Artificial Intelligence revolution is undoubtedly substantial in our society, life, firms, and employment. With data being a critical element, organizations are working towards obtaining high-quality data to train their AI models. Although data, data management, and data pipelines are part of industrial practice even before the introduction of ML models, the significance of data increased further with the advent of ML models, which force data pipeline developers to go beyond the traditional focus on data quality. The objective of this study is to analyze the impact of ML use cases on data pipelines. We assume that the data pipelines that serve ML models are given more importance compared to the conventional data pipelines. We report on a study that we conducted by observing software teams at three companies as they develop both conventional (non-ML) data pipelines and data pipelines that serve ML-based applications. We study six data pipelines from three companies and categorize them based on their criticality and purpose. Further, we identify the determinants that can be used to compare the development and maintenance of these data pipelines. Finally, we map these factors in a two-dimensional space to illustrate their importance on a scale of low, moderate, and high.
22.
  • Munappy, Aiswarya Raj, 1990, et al. (author)
  • Towards automated detection of data pipeline faults
  • 2020
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - IEEE. - 1530-1362. ; 2020-December, pp. 346-355
  • Conference paper (peer-reviewed), abstract:
    • Data pipelines play an important role throughout the data management process. They automate the steps from data generation to data reception, thereby reducing human intervention. A failure or fault in a single step of a data pipeline has cascading effects that might result in hours of manual intervention and clean-up. Data pipeline failure due to faults at different stages of data pipelines is a common challenge that eventually leads to significant performance degradation of data-intensive systems. To ensure early detection of these faults and to increase the quality of the data products, continuous monitoring and fault detection mechanisms should be included in the data pipeline. In this study, we have explored the need for incorporating automated fault detection mechanisms and mitigation strategies at different stages of the data pipeline. Further, we identified faults at different stages of the data pipeline and possible mitigation strategies that can be adopted for reducing the impact of data pipeline faults thereby improving the quality of data products. The idea of incorporating fault detection and mitigation strategies is validated by realizing a small part of the data pipeline using action research in the analytics team at a large software-intensive organization within the telecommunication domain.
23.
  • Persson Dahlqvist, Annita, et al. (author)
  • Quality Improvements by Integrating Development Processes
  • 2004
  • In: 11th Asia-Pacific Software Engineering Conference (APSEC 2004). - 1530-1362. - 0769522459 ; pp. 64-72
  • Conference paper (peer-reviewed), abstract:
    • Software is an increasing and important part of many products and systems. Software, hardware, and system level components have been developed and produced following separate processes. However, in order to improve the quality of the final complex product, requirements and prospects for automatic integrated process support are called for. Product data management (PDM) has focused on hardware products, while software configuration management (SCM) has aimed to support software development. Several attempts to integrate tools from these domains exist, but they all show little visible success. The reason for this is that integration goes far beyond tool issues only. According to our experiences, three main factors play a crucial role for a successful integration: tools and technologies, processes, and people. This paper analyses the main characteristics of PDM and SCM, describes the three integration factors, identifies a model for the integration process, and pinpoints the main challenges to achieve a successful integration of hardware and software development. The complexity of the problems is shown through several case studies.
24.
  • Shahrokni, Ali, 1982, et al. (author)
  • RobusTest: A Framework for Automated Testing of Software Robustness
  • 2011
  • In: APSEC 2011. - 1530-1362. - 9780769546094 ; pp. 171-178
  • Conference paper (peer-reviewed), abstract:
    • Robustness of a software system is defined as the degree to which the system can behave ordinarily and in conformance with the requirements in extraordinary situations. By increasing the robustness many failures which decrease the quality of the system can be avoided or masked. When it comes to specifying, testing and assessing software robustness in an efficient manner, the methods and techniques are not mature yet. This paper presents RobusTest, a framework for testing robustness properties of a system, currently with a focus on timing issues. The expected robust behavior of the system is formulated as properties. The properties are then used to automatically generate robustness test cases and assess the results. An implementation of RobusTest in Java is presented here together with results from testing different, open-source implementations of the XMPP instant messaging protocol. By executing 400 test cases that were automatically generated from properties on two such implementations we found 11 critical failures and 15 nonconformance problems as compared to the XMPP specification.
25.
  • Sica de Andrade, Hugo, 1989, et al. (author)
  • A Review on Software Architectures for Heterogeneous Platforms
  • 2018
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - 1530-1362. ; 2018-December, pp. 209-218
  • Conference paper (peer-reviewed), abstract:
    • The increasing demands for computing performance have been a reality regardless of the requirements for smaller and more energy efficient devices. Throughout the years, the strategy adopted by industry was to increase the robustness of a single processor by increasing its clock frequency and mounting more transistors so more calculations could be executed. However, it is known that the physical limits of such processors are being reached, and one way to fulfill such increasing computing demands has been to adopt a strategy based on heterogeneous computing, i.e., using a heterogeneous platform containing more than one type of processor. This way, different types of tasks can be executed by processors that are specialized in them. Heterogeneous computing, however, poses a number of challenges to software engineering, especially in the architecture and deployment phases. In this paper, we conduct an empirical study that aims at discovering the state-of-the-art in software architecture for heterogeneous computing, with focus on deployment. We conduct a systematic mapping study that retrieved 28 studies, which were critically assessed to obtain an overview of the research field. We identified gaps and trends that can be used by both researchers and practitioners as guides to further investigate the topic.
26.
  • Sica de Andrade, Hugo, 1989, et al. (author)
  • Principles for Re-architecting Software for Heterogeneous Platforms
  • 2020
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - 1530-1362. ; 2020-December, pp. 405-414
  • Conference paper (peer-reviewed), abstract:
    • The demands on software continue to increase through the constant addition of functionalities and high expectations from users. In particular, performance has been the focus in many projects with the goal of fulfilling complex and hard requirements across a variety of domains. One way to achieve satisfactory levels of performance is through heterogeneous computing, i.e., systems that contain more than one type of processing unit, such as CPUs, GPUs, and FPGAs. However, applications are typically designed to be executed on CPUs, and re-architecting software for execution on such heterogeneous hardware architectures entails several challenges that must be addressed. In this paper, we propose a framework that supports engineers in the process of making architectural decisions to re-architect software for execution on heterogeneous platforms. We present several relevant aspects that should be addressed in the process, along with suggestions on how to create design solutions using different existing approaches. The framework was developed based on multiple interactions with three industrial partners and evaluated through a computer vision application in the automotive domain.
27.
  • Zhang, Hongyi, 1996, et al. (author)
  • Federated learning systems: Architecture alternatives
  • 2020
  • In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC. - IEEE. - 1530-1362. ; 2020-December, pp. 385-394
  • Conference paper (peer-reviewed), abstract:
    • Machine Learning (ML) and Artificial Intelligence (AI) have increasingly gained attention in research and industry. Federated Learning, as an approach to distributed learning, shows its potential with the increasing number of devices on the edge and the development of computing power. However, most of the current Federated Learning systems apply a single-server centralized architecture, which may cause several critical problems, such as a single point of failure as well as scaling and performance problems. In this paper, we propose and compare four architecture alternatives for a Federated Learning system, i.e. centralized, hierarchical, regional and decentralized architectures. We conduct the study by using two well-known data sets and measuring several system performance metrics for all four alternatives. Our results suggest scenarios and use cases which are suitable for each alternative. In addition, we investigate the trade-off between communication latency, model evolution time and the model classification performance, which is crucial to applying the results into real-world industrial systems.
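The four architecture alternatives compared in entry 27 are not reproduced here; as a rough illustration of the centralized (single-server) baseline they start from, the sketch below performs a FedAvg-style weighted average of locally trained parameters. The clients, parameter vectors and weighting are invented for the example.

```python
# Minimal sketch of centralized federated aggregation (FedAvg-style averaging).
# All client data below is invented.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)          # shape: (clients, parameters)
    return (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()

# Three edge clients with locally trained parameters and local data set sizes.
client_weights = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
client_sizes = [100, 300, 50]

print("aggregated global parameters:", federated_average(client_weights, client_sizes))
# In the hierarchical, regional and decentralized alternatives discussed in the
# paper, this aggregation would instead happen at regional servers or between peers.
```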
Publication type
conference paper (27)
Content type
peer-reviewed (27)
Author/editor
Bosch, Jan, 1967 (12)
Olsson, Helena Holms ... (8)
Heldal, Rogardt, 196 ... (3)
Munappy, Aiswarya Ra ... (3)
Feldt, Robert (2)
Feldt, Robert, 1972 (2)
show more...
Martini, Antonio, 19 ... (2)
Besker, Terese, 1970 (2)
Crnkovic, Ivica, 195 ... (2)
Issa Mattos, David, ... (2)
Zhang, Hongyi, 1996 (2)
Sica de Andrade, Hug ... (2)
Marculescu, Bogdan (2)
Staron, Miroslaw, 19 ... (1)
Jansson, Anders (1)
Berbyuk Lindström, N ... (1)
Horkoff, Jennifer, 1 ... (1)
Sandahl, Kristian (1)
Crnkovic, Ivica (1)
Chaudron, Michel, 19 ... (1)
Hebig, Regina (1)
Torkar, Richard (1)
Torkar, Richard, 197 ... (1)
Ahmad, Azeem (1)
de Oliveira Neto, Fr ... (1)
Shi, Zhixiang (1)
Leifler, Ola (1)
Berger, Christian, 1 ... (1)
Gomes, Francisco, 19 ... (1)
Meding, Wilhelm (1)
Alahyari, Hiva, 1979 (1)
Matsson, Olliver (1)
Egenvall, Kim (1)
Burden, Håkan, 1976 (1)
Persson, F. (1)
Asklund, Ulf (1)
Ho-Quang, Truong (1)
Gencel, Cigdem (1)
Söder, Ola (1)
Koutsikouri, Dina, 1 ... (1)
Shahrokni, Ali, 1982 (1)
Holmström Olsson, He ... (1)
Lindqvist, Erik (1)
Siljamäki, Toni (1)
Dakkak, Anas (1)
Erlenhov, Linda, 197 ... (1)
Figalist, Iris (1)
Elsner, Christoph (1)
Dieffenbacher, Marco (1)
Eigner, Isabella (1)
show fewer...
Higher education institution
Chalmers tekniska högskola (25)
Göteborgs universitet (8)
Malmö universitet (5)
Blekinge Tekniska Högskola (3)
Linköpings universitet (1)
Lunds universitet (1)
Language
English (27)
Research subject (UKÄ/SCB)
Natural sciences (25)
Engineering and technology (11)
Social sciences (3)

Year
