SwePub
Search the SwePub database

  Advanced search

Results for search "WFRF:(Madeiral Fernanda)"

Search: WFRF:(Madeiral Fernanda)

  • Results 1-8 of 8
1.
  • Ebert, F., et al. (author)
  • Message from the AeSIR 2021 Chairs
  • 2021
  • Conference paper (peer-reviewed), abstract
    • It is our pleasure to welcome you to the first edition of the International Workshop on Automated Support to Improve code Readability (AeSIR), held virtually and co-located with the 36th IEEE/ACM International Conference on Automated Software Engineering (ASE 2021). Reading and understanding code is essential to implement new features in an existing system, refactor, debug, write tests, and perform code reviews. Developers spend large amounts of time reading code, and making code easier to read and understand is an important goal with potential practical impact. In this context, automatically measuring and improving legibility, readability, and understandability is of primary importance to help developers address program comprehension issues. AeSIR aims to provide a forum for researchers and practitioners to discuss both new approaches and emerging results related to these aspects. In this first edition of AeSIR, we have four accepted papers, which address novel tools and approaches for automatically measuring and improving code legibility, readability, and understandability. As organizers, we would like to thank the authors of all the submitted papers, the program committee members, the ASE workshop chairs, and the conference organizers, who have all contributed to the success of this workshop!
2.
  • Etemadi, Khashayar, et al. (author)
  • Augmenting Diffs With Runtime Information
  • 2023
  • In: IEEE Transactions on Software Engineering. - : Institute of Electrical and Electronics Engineers (IEEE). - 0098-5589 .- 1939-3520. ; 49:11, pp. 4988-5007
  • Journal article (peer-reviewed), abstract
    • Source code diffs are used on a daily basis as part of code review, inspection, and auditing. To facilitate understanding, they are typically accompanied by explanations that describe the essence of what is changed in the program. As manually crafting high-quality explanations is a cumbersome task, researchers have proposed automatic techniques to generate code diff explanations. Existing explanation generation methods focus solely on static analysis, i.e., they do not take advantage of runtime information to explain code changes. In this article, we propose Collector-Sahab, a novel tool that augments code diffs with runtime difference information. Collector-Sahab compares the program states of the original (old) and patched (new) versions of a program to find unique variable values. Then, Collector-Sahab adds this novel runtime information to the source code diff for display, for instance, in code review systems. As an evaluation, we run Collector-Sahab on 584 code diffs for Defects4J bugs and find that it successfully augments the code diff for 95% (555/584) of them. We also perform a user study and ask eight participants to score the augmented code diffs generated by Collector-Sahab. Based on this user study, we conclude that developers find the idea of adding runtime data to code diffs promising and useful. Overall, our experiments show the effectiveness and usefulness of Collector-Sahab in augmenting code diffs with runtime difference information. Publicly available repository: https://github.com/ASSERT-KTH/collector-sahab.
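The tool's core comparison, as described in the abstract, is to contrast variable values observed at runtime in the old and new program versions and keep the values unique to each side. A minimal sketch of that idea, using hypothetical data (the real tool instruments Java programs and attaches the results to a diff):

```python
def unique_values(old_states, new_states):
    """Given per-variable sets of values observed at runtime in the old and
    new program versions, return the values that are unique to each side."""
    result = {}
    for var in set(old_states) | set(new_states):
        old = old_states.get(var, set())
        new = new_states.get(var, set())
        only_old, only_new = old - new, new - old
        if only_old or only_new:  # keep only variables whose values differ
            result[var] = {"old_only": only_old, "new_only": only_new}
    return result

# Hypothetical example: the patch changed how `total` is computed.
old = {"total": {10, 15}, "count": {3}}
new = {"total": {12, 15}, "count": {3}}
runtime_diff = unique_values(old, new)
# `total` differs at runtime; `count` does not and is omitted.
```

Annotations like `runtime_diff` could then be rendered next to the corresponding changed lines of the textual diff.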
3.
  • Etemadi, Khashayar, et al. (author)
  • Sorald : Automatic Patch Suggestions for SonarQube Static Analysis Violations
  • 2022
  • In: IEEE Transactions on Dependable and Secure Computing. - : Institute of Electrical and Electronics Engineers (IEEE). - 1545-5971 .- 1941-0018. ; pp. 1-1
  • Journal article (peer-reviewed), abstract
    • Previous work has shown that early resolution of issues detected by static code analyzers can prevent major costs later on. However, developers often ignore such issues for two main reasons. First, many issues must be interpreted to determine whether they correspond to actual flaws in the program. Second, static analyzers often do not present issues in an actionable way. To address these problems, we present Sorald: a novel system that uses metaprogramming templates to transform the abstract syntax trees of programs and suggest fixes for static analysis warnings. Thus, the burden on the developer is reduced from interpreting and fixing static issues to inspecting and approving full-fledged solutions. Sorald fixes violations of 10 rules from SonarJava, one of the most widely used static analyzers for Java. We evaluate Sorald on a dataset of 161 popular repositories on GitHub. Our analysis shows the effectiveness of Sorald: it fixes 65% (852/1,307) of the violations that meet the repair preconditions. Overall, our experiments show that it is possible to automatically fix notable violations of the static analysis rules produced by the state-of-the-art static analyzer SonarJava.
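Sorald itself applies metaprogramming templates to Java abstract syntax trees; as a rough analogue of the fix-by-AST-transformation idea, here is a sketch using Python's `ast` module. The rule shown (rewriting `x == None` to the idiomatic `x is None`) is illustrative and not a SonarJava rule:

```python
import ast

class FixNoneComparison(ast.NodeTransformer):
    """Rewrite `x == None` into `x is None` as an AST transformation,
    in the spirit of turning a static-analysis warning into a concrete fix."""

    def visit_Compare(self, node):
        self.generic_visit(node)
        # Only handle the simple single-comparator case, e.g. `x == None`.
        if (len(node.ops) == 1
                and isinstance(node.ops[0], ast.Eq)
                and isinstance(node.comparators[0], ast.Constant)
                and node.comparators[0].value is None):
            node.ops[0] = ast.Is()
        return node

source = "if x == None:\n    y = 1"
tree = FixNoneComparison().visit(ast.parse(source))
fixed = ast.unparse(ast.fix_missing_locations(tree))
```

The transformed tree is unparsed back to source, so the developer reviews a complete candidate patch rather than a raw warning.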
4.
  • Loriot, Benjamin, et al. (author)
  • Styler : learning formatting conventions to repair Checkstyle violations
  • 2022
  • In: Empirical Software Engineering. - : Springer Nature. - 1382-3256 .- 1573-7616. ; 27:6
  • Journal article (peer-reviewed), abstract
    • Ensuring the consistent usage of formatting conventions is an important aspect of modern software quality assurance. To do so, the source code of a project should be checked against the formatting conventions (or rules) adopted by its development team, and any detected violations should then be repaired. While the former task can be done automatically by format checkers implemented in linters, there is no satisfactory solution for the latter. Manually fixing formatting convention violations is a waste of developer time, and code formatters do not take into account the conventions adopted and configured by developers for the linter in use. In this paper, we present Styler, a tool dedicated to fixing formatting rule violations raised by format checkers, using a machine learning approach. For a given project, Styler first generates training data by injecting violations of the project-specific rules into violation-free source code files. Then, it learns fixes by feeding long short-term memory neural networks with the training data encoded into token sequences. Finally, it predicts fixes for real formatting violations with the trained models. Currently, Styler supports a single checker, Checkstyle, a highly configurable and popular format checker for Java. In an empirical evaluation, Styler repaired 41% of 26,791 Checkstyle violations mined from 104 GitHub projects. Moreover, we compared Styler with the IntelliJ plugin CheckStyle-IDEA and the machine-learning-based code formatters Naturalize and CodeBuff. We found that Styler fixes violations of a diverse set of Checkstyle rules (24/25 rules), generates smaller repairs than the other systems, and predicts repairs in seconds once trained on a project. Through a manual analysis, we identified cases in which Styler fails to generate correct repairs, which can guide further improvements of Styler. Finally, the results suggest that Styler can be useful in helping developers repair Checkstyle formatting violations.
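Styler's training-data step, injecting violations into violation-free code to obtain (violating, clean) pairs, can be sketched as follows. This is a simplified illustration with made-up whitespace mutations; the real tool injects violations of a project's configured Checkstyle rules into Java files and trains LSTM models on token sequences:

```python
import random

def inject_violation(line, rng):
    """Create a (violating, clean) training pair by corrupting whitespace,
    mimicking how training data is synthesized from violation-free code."""
    mutations = [
        lambda s: s.replace("    ", "   ", 1),   # wrong indentation width
        lambda s: s.replace(" = ", "=", 1),      # missing spaces around '='
        lambda s: s + " ",                       # trailing whitespace
    ]
    corrupted = rng.choice(mutations)(line)
    return corrupted, line  # (model input, expected fix)

rng = random.Random(0)
corrupted, clean = inject_violation("    int x = 1;", rng)
```

A model trained on many such pairs learns to map a violating token sequence back to its conforming form, which is then used to fix real violations.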
5.
  • Madeiral Delfim, Fernanda, et al. (author)
  • A large-scale study on human-cloned changes for automated program repair
  • 2021
  • In: 2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR 2021). - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 510-514
  • Conference paper (peer-reviewed), abstract
    • Research in automatic program repair has shown that real bugs can be automatically fixed. However, several challenges involved in this task are not yet fully addressed. As an example, consider a test-suite-based repair tool that performs a change in a program to fix a bug spotted by a failing test case, after which the same or another test case fails. This could mean that the change is a partial fix for the bug or that another bug was manifested. However, the repair tool discards the change and possibly performs other repair attempts. One might wonder whether the applied change should also be applied in other locations in the program so that the bug is fully fixed. In this paper, we investigate the extent to which bug-fix changes are cloned by developers within patches. Our goal is to investigate the need for multi-location repair using identical or similar changes in identical or similar contexts. To do so, we analyzed 3,049 multi-hunk patches from the ManySStuBs4J dataset, a large dataset of single-statement bug-fix changes. We found that 68% of the multi-hunk patches contain at least one change clone group. Moreover, most of these patches (70%) are strictly-cloned ones, i.e., patches fully composed of changes belonging to one single change clone group. Finally, most of the strictly-cloned patches (89%) contain change clones with identical changes, independently of their contexts. We conclude that automated solutions for creating patches composed of identical or similar changes can be useful for fixing bugs.
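The central notion of a change clone group, i.e., the same change repeated across several hunks of one patch, can be illustrated with a minimal sketch. The hunks below are hypothetical, and the study's actual analysis also considers similar (not only literally identical) changes:

```python
from collections import defaultdict

def change_clone_groups(hunks):
    """Group the hunks of a multi-hunk patch by their literal change content;
    groups with two or more members are change clone groups."""
    groups = defaultdict(list)
    for i, hunk in enumerate(hunks):
        groups[hunk].append(i)
    return [idxs for idxs in groups.values() if len(idxs) >= 2]

patch = [
    "-  if (x > 0)\n+  if (x >= 0)",
    "-  if (y > 0)\n+  if (y >= 0)",   # similar, but not identical
    "-  if (x > 0)\n+  if (x >= 0)",   # identical to hunk 0
]
print(change_clone_groups(patch))  # [[0, 2]]
```

A patch whose hunks all fall into a single group would be "strictly-cloned" in the paper's terminology.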
6.
7.
  • Oliveira, D., et al. (author)
  • Evaluating Code Readability and Legibility : An Examination of Human-centric Studies
  • 2020
  • In: Proceedings - 2020 IEEE International Conference on Software Maintenance and Evolution, ICSME 2020. - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 348-359
  • Conference paper (peer-reviewed), abstract
    • Reading code is an essential activity in software maintenance and evolution. Several studies with human subjects have investigated how different factors, such as the employed programming constructs and naming conventions, can impact code readability, i.e., what makes a program easier or harder for developers to read and apprehend, and code legibility, i.e., what influences the ease of identifying elements of a program. These studies evaluate readability and legibility by means of different comprehension tasks and response variables. In this paper, we examine these tasks and variables in studies that compare programming constructs, coding idioms, naming conventions, and formatting guidelines, e.g., recursive vs. iterative code. To that end, we conducted a systematic literature review in which we found 54 relevant papers. Most of these studies evaluate code readability and legibility by measuring the correctness of the subjects' results (83.3%) or simply asking for their opinions (55.6%). Some studies (16.7%) rely exclusively on the latter variable. There are still few studies that monitor subjects' physical signs, such as brain activation regions (5%). Moreover, our study shows that some variables are multi-faceted. For instance, correctness can be measured as the ability to predict the output of a program, answer questions about its behavior, or recall parts of it. These results make it clear that different evaluation approaches require different competencies from subjects, e.g., tracing the program vs. summarizing its goal vs. memorizing its text. To assist researchers in the design of new studies and improve our comprehension of existing ones, we model program comprehension as a learning activity by adapting a preexisting learning taxonomy. This adaptation indicates that some competencies, e.g., tracing, are often exercised in these evaluations whereas others, e.g., relating similar code snippets, are rarely targeted.
8.
  • Sobreira, Victor, et al. (author)
  • Dissection of a bug dataset : Anatomy of 395 patches from Defects4J
  • 2018
  • In: 25th IEEE International Conference on Software Analysis, Evolution and Reengineering, SANER 2018 - Proceedings. - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 130-140
  • Conference paper (peer-reviewed), abstract
    • Well-designed and publicly available datasets of bugs are an invaluable asset for advancing research fields such as fault localization and program repair, as they allow direct and fair comparison between competing techniques as well as the replication of experiments. These datasets need to be deeply understood by researchers: the answer to questions such as 'which bugs can my technique handle?' and 'for which bugs is my technique effective?' depends on the comprehension of properties related to bugs and their patches. However, such properties are usually not included in the datasets, and there is still no widely adopted methodology for characterizing bugs and patches. In this work, we study in depth the 395 patches of the Defects4J dataset. Quantitative properties (patch size and spreading) were automatically extracted, whereas qualitative ones (repair actions and patterns) were manually extracted using a thematic-analysis-based approach. We found that 1) the median size of Defects4J patches is four lines, and almost 30% of the patches contain only additions of lines; 2) 92% of the patches change only one file, and 38% have no spreading at all; 3) the top-3 most applied repair actions are additions of method calls, conditionals, and assignments, occurring in 77% of the patches; and 4) nine repair patterns were found covering 95% of the patches, of which the most prevalent, appearing in 43% of the patches, concerns conditional blocks. These results are useful for researchers to perform advanced analyses of their techniques' results based on Defects4J. Moreover, our set of properties can be used to characterize and compare different bug datasets.
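The quantitative properties that the study extracts automatically (patch size, number of changed files) can be approximated directly from a unified diff. A simplified sketch, using loose readings of the metrics rather than the paper's exact definitions:

```python
def patch_metrics(diff_lines):
    """Approximate patch size (count of added/removed lines) and the number
    of changed files from the lines of a unified diff."""
    size = sum(
        1 for l in diff_lines
        if (l.startswith("+") or l.startswith("-"))
        and not l.startswith(("+++", "---"))  # skip file headers
    )
    files = sum(1 for l in diff_lines if l.startswith("+++"))
    return size, files

diff = [
    "--- a/Foo.java", "+++ b/Foo.java",
    "@@ -1,3 +1,3 @@",
    "-int x = 0;", "+int x = 1;",
]
print(patch_metrics(diff))  # (2, 1)
```

The paper's "spreading" property (how far apart a patch's changes lie) would additionally require the hunk line numbers from the `@@ ... @@` headers.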