SwePub
Search the SwePub database


Result list for search "WFRF:(Ericsson Morgan 1973)"

Search: WFRF:(Ericsson Morgan 1973)

  • Result 1-10 of 52
1.
  • Alégroth, Emil, 1984, et al. (author)
  • Teaching scrum – what we did, what we will do and what impedes us
  • 2015
  • In: Lecture Notes in Business Information Processing. - 1865-1348 .- 1865-1356. - 9783319186115 ; 212, s. 361-362
  • Conference paper (other academic/artistic), abstract:
    • This paper analyses the way we teach Scrum. We reflect on our intended learning outcomes, which challenges we find in teaching Scrum and which lessons we have learned during the last four years. We also give an outlook on the way we want to introduce and apply Scrum in our teaching and how we intend to improve the curriculum.
  •  
2.
  • Ambrosius, Robin, et al. (author)
  • Interviews Aided with Machine Learning
  • 2018
  • In: Perspectives in Business Informatics Research. BIR 2018. - Cham : Springer. - 9783319999500 - 9783319999517 ; , s. 202-216
  • Conference paper (peer-reviewed), abstract:
    • We have designed and implemented a Computer Aided Personal Interview (CAPI) system that learns from expert interviews and can support less experienced interviewers by, for example, suggesting questions to ask or skip. We were particularly interested in streamlining the due diligence process when estimating the value of software startups. For our design we evaluated some machine learning algorithms and their trade-offs, and in a small case study we evaluated their implementation and performance. We find that while there is room for improvement, the system can learn and recommend questions. The CAPI system can in principle be applied to any domain in which long interview sessions should be shortened without sacrificing the quality of the assessment.
  •  
3.
  • Ericsson, Morgan, 1973, et al. (author)
  • Mining Job Ads to find what skills are sought after from an employers' perspective on IT graduates
  • 2014
  • In: ITICSE 2014 - Proceedings of the 2014 Innovation and Technology in Computer Science Education Conference. - New York, New York, USA : Association for Computing Machinery (ACM). - 9781450328333
  • Conference paper (peer-reviewed), abstract:
    • We mine job ads to discover what skills are required from an employer's perspective. Some obvious trends appear, such as skills related to web and mobile technology. We aim to uncover more detailed information as the study continues to allow course content to better match the expressed needs.
  •  
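The entry above describes mining job ads for the skills employers ask for. Below is a minimal, hypothetical sketch of that kind of keyword counting; the skill vocabulary, the example ads, and the count_skills helper are illustrative assumptions, not the authors' actual pipeline, which likely involves more elaborate text processing.

```python
from collections import Counter
import re

# Hypothetical skill vocabulary; the study itself derives skills from the ads.
SKILLS = ["java", "javascript", "sql", "android", "ios", "html", "css", "agile"]

def count_skills(ads, skills=SKILLS):
    """Count how many ads mention each skill (case-insensitive, whole-word match)."""
    counts = Counter()
    for ad in ads:
        text = ad.lower()
        for skill in skills:
            if re.search(r"\b" + re.escape(skill) + r"\b", text):
                counts[skill] += 1
    return counts

if __name__ == "__main__":
    ads = [
        "We seek a developer with Java and SQL experience for our Android team.",
        "Front-end role: HTML, CSS and JavaScript in an agile environment.",
    ]
    for skill, n in count_skills(ads).most_common():
        print(f"{skill}: {n}")
```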
4.
  • Ericsson, Morgan, Docent, 1973-, et al. (author)
  • TDMentions : A Dataset of Technical Debt Mentions in Online Posts
  • 2019
  • In: 2019 IEEE/ACM INTERNATIONAL CONFERENCE ON TECHNICAL DEBT (TECHDEBT 2019). - : IEEE. - 9781728133713 ; , s. 123-124
  • Conference paper (peer-reviewed), abstract:
    • The term technical debt is easy to understand as a metaphor, but can quickly grow complex in practice. We contribute with a dataset, TDMentions, that enables researchers to study how developers and end users use the term technical debt in online posts and discussions. The dataset consists of posts from news aggregators and Q&A-sites, blog posts, and issues and commits on GitHub.
  •  
5.
  • Hönel, Sebastian, et al. (author)
  • A changeset-based approach to assess source code density and developer efficacy
  • 2018
  • In: ICSE '18 Proceedings of the 40th International Conference on Software Engineering: Companion Proceedings. - New York, NY, USA : IEEE. - 9781450356633 ; , s. 220-221
  • Conference paper (peer-reviewed), abstract:
    • The productivity of a (team of) developer(s) can be expressed as a ratio between effort and delivered functionality. Several different estimation models have been proposed. These are based on statistical analysis of real development projects; their accuracy depends on the number and the precision of data points. We propose a data-driven method to automate the generation of precise data points. Functionality is proportional to code size, and Lines of Code (LoC) is a fundamental metric of code size. However, code size and LoC are not well defined as they could include or exclude lines that do not affect the delivered functionality. We present a new approach to measure the density of code in software repositories. We demonstrate how the accuracy of development time spent in relation to delivered code can be improved when basing it on net- instead of gross-size measurements. We validated our tool by studying ca. 1,650 open-source software projects.
  •  
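The entry above introduces code density as the ratio between a changeset's net and gross size. The sketch below illustrates that ratio under the simplifying assumption that net lines are those that are neither blank nor pure comments; the actual tool works on whole changesets in repositories and uses a more careful definition.

```python
def gross_size(lines):
    """Gross size: every changed line counts."""
    return len(lines)

def net_size(lines):
    """Net size (simplified assumption): ignore blank lines and pure comment lines."""
    return sum(
        1 for line in lines
        if line.strip() and not line.strip().startswith(("#", "//", "/*", "*"))
    )

def code_density(lines):
    """Density = net size / gross size; 1.0 means every line carries functionality."""
    gross = gross_size(lines)
    return net_size(lines) / gross if gross else 0.0

if __name__ == "__main__":
    changed_lines = [
        "def add(a, b):",
        "    # add two numbers",
        "",
        "    return a + b",
    ]
    print(f"density = {code_density(changed_lines):.2f}")  # 0.50 in this toy example
```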
6.
  • Hönel, Sebastian, et al. (author)
  • Activity-Based Detection of (Anti-)Patterns : An Embedded Case Study of the Fire Drill
  • 2024
  • In: e-Informatica Software Engineering Journal. - : Wroclaw University of Science and Technology. - 1897-7979 .- 2084-4840. ; 18:1
  • Journal article (peer-reviewed), abstract:
    • Background: Nowadays, expensive, error-prone, expert-based evaluations are needed to identify and assess software process anti-patterns. Process artifacts cannot be automatically used to quantitatively analyze and train prediction models without exact ground truth. Aim: Develop a replicable methodology for organizational learning from process (anti-)patterns, demonstrating the mining of reliable ground truth and exploitation of process artifacts. Method: We conduct an embedded case study to find manifestations of the Fire Drill anti-pattern in n = 15 projects. To ensure quality, three human experts agree. Their evaluation and the process’ artifacts are utilized to establish a quantitative understanding and train a prediction model. Results: Qualitative review shows many project issues. (i) Expert assessments consistently provide credible ground truth. (ii) Fire Drill phenomenological descriptions match project activity time (for example, development). (iii) Regression models trained on ≈ 12–25 examples are sufficiently stable. Conclusion: The approach is data source-independent (source code or issue-tracking). It allows leveraging process artifacts for establishing additional phenomenon knowledge and training robust predictive models. The results indicate the aptness of the methodology for the identification of the Fire Drill and similar anti-pattern instances modeled using activities. Such identification could be used in post mortem process analysis supporting organizational learning for improving processes.
  •  
7.
  • Hönel, Sebastian, et al. (author)
  • Bayesian Regression on segmented data using Kernel Density Estimation
  • 2019
  • In: 5th annual Big Data Conference. - : Zenodo.
  • Conference paper (other academic/artistic), abstract:
    • The challenge of having to deal with dependent variables in classification and regression using techniques based on Bayes' theorem is often avoided by assuming a strong independence between them, hence such techniques are said to be naive. While analytical solutions supporting classification on arbitrary amounts of discrete and continuous random variables exist, practical solutions are scarce. We evaluate a few Bayesian models empirically and consider their computational complexity. To overcome the often assumed independence, those models attempt to resolve the dependencies using empirical joint conditional probabilities and joint conditional probability densities. These are obtained by posterior probabilities of the dependent variable after segmenting the dataset for each random variable's value. We demonstrate the advantages of these models, such as their nature being deterministic (no randomization or weights required), that no training is required, that each random variable may have any kind of probability distribution, how robustness is upheld without having to impute missing data, and that online learning is effortlessly possible. We compare such Bayesian models against well-established classifiers and regression models, using some well-known datasets. We conclude that our evaluated models can outperform other models in certain settings, using classification. The regression models deliver respectable performance, without leading the field.
  •  
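The entry above describes resolving variable dependencies by segmenting the data on each conditioning variable's value and estimating conditional densities. The sketch below is a greatly simplified illustration of that idea for a single discrete conditioning variable, using Gaussian KDE from SciPy; it is not the authors' model, and the data and prediction grid are made up.

```python
import numpy as np
from scipy.stats import gaussian_kde

def conditional_kdes(x_discrete, y_continuous):
    """Fit one Gaussian KDE of y per observed value of the discrete variable x
    (a simplified version of the 'segmenting' step described in the abstract)."""
    kdes = {}
    for value in np.unique(x_discrete):
        segment = y_continuous[x_discrete == value]
        if len(segment) > 1:
            kdes[value] = gaussian_kde(segment)
    return kdes

def predict_mode(kdes, x_value, grid=None):
    """Predict y for a given x as the mode of the estimated conditional density p(y | x)."""
    if grid is None:
        grid = np.linspace(-5.0, 15.0, 1001)
    density = kdes[x_value](grid)
    return grid[np.argmax(density)]

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    x = rng.integers(0, 3, size=300)               # discrete conditioning variable
    y = 2.0 * x + rng.normal(1.0, 0.5, size=300)   # continuous dependent variable
    kdes = conditional_kdes(x, y)
    for value in sorted(kdes):
        print(value, round(predict_mode(kdes, value), 2))
```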
8.
  • Hönel, Sebastian, et al. (author)
  • Contextual Operationalization of Metrics as Scores : Is My Metric Value Good?
  • 2022
  • In: Proceedings of the 2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS). - : IEEE. - 9781665477048 ; , s. 333-343
  • Conference paper (peer-reviewed), abstract:
    • Software quality models aggregate metrics to indicate quality. Most metrics reflect counts derived from events or attributes that cannot directly be associated with quality. Worse, what constitutes a desirable value for a metric may vary across contexts. We demonstrate an approach to transforming arbitrary metrics into absolute quality scores by leveraging metrics captured from similar contexts. In contrast to metrics, scores represent freestanding quality properties that are also comparable. We provide a web-based tool for obtaining contextualized scores for metrics as obtained from one’s software. Our results indicate that significant differences among various metrics and contexts exist. The suggested approach works with arbitrary contexts. Given sufficient contextual information, it allows for answering the question of whether a metric value is good/bad or common/extreme.
  •  
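The entry above proposes operationalizing a raw metric as a score relative to the same metric observed in comparable contexts. The sketch below is a minimal, assumption-laden illustration using the empirical CDF of a context sample; the metric name and the sample values are made up, and the authors' tool and method are more elaborate.

```python
import numpy as np

def metric_score(value, context_sample, lower_is_better=True):
    """Map a raw metric value to a [0, 1] score relative to a sample of the same
    metric from a comparable context, using the empirical CDF.
    A score near 1 means the value is better than most of the context."""
    context = np.asarray(context_sample, dtype=float)
    quantile = np.mean(context <= value)          # empirical CDF evaluated at `value`
    return 1.0 - quantile if lower_is_better else quantile

if __name__ == "__main__":
    # Hypothetical cyclomatic-complexity values observed in similar projects.
    context = [2, 3, 3, 4, 5, 6, 8, 9, 12, 20]
    print(metric_score(4, context))   # fairly good: lower complexity than most peers
    print(metric_score(15, context))  # poor: higher complexity than most peers
```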
9.
  • Hönel, Sebastian (author)
  • Efficient Automatic Change Detection in Software Maintenance and Evolutionary Processes
  • 2020
  • Licentiate thesis (other academic/artistic), abstract:
    • Software maintenance is such an integral part of a software's evolutionary process that it consumes much of the total resources available. Some estimate the cost of maintenance to be up to 100 times the cost of developing the software. Software that is not maintained builds up technical debt, and if that debt is not paid off in time it will eventually outweigh the value of the software, if no countermeasures are undertaken. Software must adapt to changes in its environment, or to new and changed requirements. It must further receive corrections for emerging faults and vulnerabilities. Constant maintenance can prepare the software for the accommodation of future changes. While there may be plenty of rationale for future changes, the reasons behind historical changes may no longer be accessible. Understanding change in software evolution provides valuable insights into, e.g., the quality of a project, or aspects of the underlying development process. These are worth exploiting, for, e.g., fault prediction, managing the composition of the development team, or for effort estimation models. The size of software is a metric often used in such models, yet it is not well-defined. In this thesis, we seek to establish a robust, versatile and computationally cheap metric that quantifies the size of changes made during maintenance. We operationalize this new metric and exploit it for automated and efficient commit classification. Our results show that the density of a commit, that is, the ratio between its net- and gross-size, is a metric that can replace other, more expensive metrics in existing classification models. Models using this metric represent the current state of the art in automatic commit classification. The density provides a more fine-grained and detailed insight into the types of maintenance activities in a software project. Additional properties of commits, such as their relation or intermediate sojourn-times, have not been previously exploited for improved classification of changes. We reason about the potential of these, and suggest and implement dependent mixture- and Bayesian models that exploit joint conditional densities, models that each have their own trade-offs with regard to computational cost and complexity, and prediction accuracy. Such models can outperform well-established classifiers, such as Gradient Boosting Machines. All of our empirical evaluations comprise large datasets, software and experiments, all of which we have published alongside the results as open access. We have reused, extended and created datasets, and released software packages for change detection and Bayesian models used for all of the studies conducted.
  •  
10.
  • Hönel, Sebastian, et al. (author)
  • Importance and Aptitude of Source code Density for Commit Classification into Maintenance Activities
  • 2019
  • In: 2019 IEEE 19th International Conference on Software Quality, Reliability and Security (QRS). - : IEEE. - 9781728139272 - 9781728139289 ; , s. 109-120
  • Conference paper (peer-reviewed), abstract:
    • Commit classification, the automatic classification of the purpose of changes to software, can support the understanding and quality improvement of software and its development process. We introduce code density of a commit, a measure of the net size of a commit, as a novel feature and study how well it is suited to determine the purpose of a change. We also compare the accuracy of code-density-based classifications with existing size-based classifications. By applying standard classification models, we demonstrate the significance of code density for the accuracy of commit classification. We achieve up to 89% accuracy and a Kappa of 0.82 for the cross-project commit classification where the model is trained on one project and applied to other projects. Such highly accurate classification of the purpose of software changes helps to improve the confidence in software (process) quality analyses exploiting this classification information.
  •  
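The entry above uses a commit's code density as a feature for classifying the purpose of a change. The sketch below shows that kind of setup with a standard scikit-learn classifier; the feature set, labels, and training data are hypothetical placeholders, and the paper's models, features, and evaluation are far more extensive.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: one row per commit.
# Features: [gross_size_in_lines, code_density]; labels: maintenance activity.
X = np.array([
    [120, 0.85], [15, 0.40], [300, 0.90], [8, 0.25], [60, 0.70], [10, 0.30],
])
y = np.array(["adaptive", "corrective", "adaptive", "corrective", "perfective", "corrective"])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[200, 0.80], [12, 0.35]]))  # classify two unseen commits
```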
Type of publication
conference paper (37)
journal article (7)
editorial proceedings (2)
other publication (2)
doctoral thesis (2)
book chapter (1)
licentiate thesis (1)
Type of content
peer-reviewed (46)
other academic/artistic (6)
Author/Editor
Ericsson, Morgan, Do ... (33)
Wingkvist, Anna, PhD ... (27)
Löwe, Welf (20)
Ericsson, Morgan, 19 ... (19)
Wingkvist, Anna, 197 ... (15)
Olsson, Tobias, 1974 ... (15)
Hönel, Sebastian (12)
Ulan, Maria (8)
Weyns, Danny (4)
Martins, Rafael Mess ... (4)
Steghöfer, Jan-Phili ... (2)
Alégroth, Emil, 1984 ... (2)
Burden, Håkan, 1976 (2)
Knauss, Eric, 1977 (2)
Wingkvist, Anna (2)
Kerren, Andreas, Dr. ... (2)
Kucher, Kostiantyn, ... (2)
Picha, Petr (2)
Brada, Premek (2)
Pllana, Sabri (1)
Olsson, Tobias (1)
Andersson, Jesper, 1 ... (1)
Olsson, T (1)
Hammouda, Imed (1)
Ambrosius, Robin (1)
Caporuscio, Mauro, 1 ... (1)
Toll, Daniel, 1978- (1)
Toll, Daniel (1)
Thornadtsson, Johan (1)
Löwe, Welf, Professo ... (1)
Hammouda, Imed, 1953 (1)
Danek, Jakub (1)
Brada, Přemek, Assoc ... (1)
Staron, Miroslaw, Ph ... (1)
Kopetschny, Chris (1)
Löwe, Welf, 1969- (1)
Toll, D. (1)
Wingkvist, A. (1)
Herold, Sebastian, D ... (1)
Frejdestedt, Frans (1)
Hulth, Anna-Karin (1)
University
Linnaeus University (49)
University of Gothenburg (5)
Chalmers University of Technology (5)
Uppsala University (1)
Linköping University (1)
RISE (1)
Language
English (52)
Research subject (UKÄ/SCB)
Natural sciences (49)
Engineering and Technology (2)
Social Sciences (2)
