SwePub
Search the SwePub database


Results list for the search "WFRF:(Madeiral F.)"


  • Results 1-3 of 3
1.
  • Ebert, F., et al. (authors)
  • Message from the AeSIR 2021 Chairs
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • It is our pleasure to welcome you to the first edition of the International Workshop on Automated Support to Improve code Readability (AeSIR), held virtually and co-located with the 36th IEEE/ACM International Conference on Automated Software Engineering (ASE 2021). Reading and understanding code is essential to implement new features in an existing system, refactor, debug, write tests, and perform code reviews. Developers spend large amounts of time reading code, and making code easier to read and understand is an important goal with potential practical impact. In this context, automatically measuring and improving legibility, readability, and understandability is of primary importance to help developers address program comprehension issues. The Workshop on Automated Support to Improve code Readability (AeSIR) aims to provide a forum for researchers and practitioners to discuss both new approaches and emerging results related to these aspects. In this first edition of AeSIR, we have four accepted papers, which address novel tools and approaches for automatically measuring and improving code legibility, readability, and understandability. As organizers, we would like to thank the authors of all the submitted papers, the program committee members, the ASE workshop chairs, and the conference organizers, who have all contributed to the success of this workshop!
2.
  • Madeiral, F., et al. (authors)
  • BEARS: An Extensible Java Bug Benchmark for Automatic Program Repair Studies
  • 2019
  • In: SANER 2019 - Proceedings of the 2019 IEEE 26th International Conference on Software Analysis, Evolution, and Reengineering. Institute of Electrical and Electronics Engineers (IEEE). ISBN 9781728105918, pp. 468-478
  • Conference paper (peer-reviewed), abstract:
    • Benchmarks of bugs are essential to empirically evaluate automatic program repair tools. In this paper, we present BEARS, a project for collecting and storing bugs into an extensible bug benchmark for automatic repair studies in Java. The collection of bugs relies on the build state of commits from Continuous Integration (CI) to find potential pairs of buggy and patched program versions from open-source projects hosted on GitHub. Each pair of program versions passes through a pipeline in which an attempt to reproduce the bug and its patch is performed. The core step of the reproduction pipeline is the execution of the program's test suite on both program versions. If a test failure is found in the buggy program version candidate and no test failure is found in its patched program version candidate, the bug and its patch have been successfully reproduced. The uniqueness of BEARS is its use of CI builds to identify buggy and patched program version candidates; CI has been widely adopted in recent years in open-source projects. This approach allows us to collect bugs from a diversity of projects, beyond mature projects that use bug tracking systems. Moreover, BEARS was designed to be publicly available and easily extensible by the research community through the automatic creation of branches with bugs in a given GitHub repository, which can be used for pull requests in the BEARS repository. We present in this paper the approach employed by BEARS, and we deliver version 1.0 of BEARS, which contains 251 reproducible bugs collected from 72 projects that use the Travis CI and Maven build environment.
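The reproduction check at the core of the pipeline described in this abstract can be sketched as follows: run the project's test suite on both program versions, and keep the pair only if the buggy version has at least one failing test while the patched version has none. This is a minimal illustration in Python; the names `CandidatePair` and `run_test_suite` are hypothetical and not part of the actual BEARS tooling, which is built on Java, Maven, and Travis CI.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CandidatePair:
    """A potential buggy/patched program-version pair mined from CI builds."""
    buggy_version: str    # e.g. a commit whose CI build failed its tests
    patched_version: str  # the follow-up commit whose CI build passed

# run_test_suite(version) -> list of names of failing tests for that version
TestRunner = Callable[[str], List[str]]

def is_reproduced(pair: CandidatePair, run_test_suite: TestRunner) -> bool:
    """A pair counts as a reproduced bug iff the suite fails on the buggy
    version and passes (no failures) on the patched version."""
    failing_on_buggy = run_test_suite(pair.buggy_version)
    failing_on_patched = run_test_suite(pair.patched_version)
    return len(failing_on_buggy) > 0 and len(failing_on_patched) == 0

def collect_benchmark(pairs: List[CandidatePair],
                      run_test_suite: TestRunner) -> List[CandidatePair]:
    """Keep only the candidate pairs whose bug and patch were reproduced."""
    return [p for p in pairs if is_reproduced(p, run_test_suite)]
```

Note that a pair where both versions pass (or both fail) is discarded: only a clean fail-then-pass transition counts as a reproduced bug-patch pair.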
3.
  • Oliveira, D., et al. (authors)
  • Evaluating Code Readability and Legibility: An Examination of Human-centric Studies
  • 2020
  • In: Proceedings - 2020 IEEE International Conference on Software Maintenance and Evolution, ICSME 2020. Institute of Electrical and Electronics Engineers (IEEE). pp. 348-359
  • Conference paper (peer-reviewed), abstract:
    • Reading code is an essential activity in software maintenance and evolution. Several studies with human subjects have investigated how different factors, such as the employed programming constructs and naming conventions, can impact code readability, i.e., what makes a program easier or harder for developers to read and apprehend, and code legibility, i.e., what influences the ease of identifying elements of a program. These studies evaluate readability and legibility by means of different comprehension tasks and response variables. In this paper, we examine these tasks and variables in studies that compare programming constructs, coding idioms, naming conventions, and formatting guidelines, e.g., recursive vs. iterative code. To that end, we conducted a systematic literature review in which we found 54 relevant papers. Most of these studies evaluate code readability and legibility by measuring the correctness of the subjects' results (83.3%) or simply asking for their opinions (55.6%). Some studies (16.7%) rely exclusively on the latter variable. There are still few studies that monitor subjects' physical signs, such as brain activation regions (5%). Moreover, our study shows that some variables are multi-faceted. For instance, correctness can be measured as the ability to predict the output of a program, answer questions about its behavior, or recall parts of it. These results make it clear that different evaluation approaches require different competencies from subjects, e.g., tracing the program vs. summarizing its goal vs. memorizing its text. To assist researchers in the design of new studies and improve our comprehension of existing ones, we model program comprehension as a learning activity by adapting a preexisting learning taxonomy. This adaptation indicates that some competencies, e.g., tracing, are often exercised in these evaluations, whereas others, e.g., relating similar code snippets, are rarely targeted.
Publication type
conference paper (3)
Content type
peer-reviewed (3)
Author/editor
Madeiral Delfim, Fer ... (2)
Castor, F. (2)
Monperrus, Martin (1)
Bruno, R. (1)
Ebert, F. (1)
Scalabrino, S. (1)
Oliveto, R. (1)
Oliveira, D (1)
Madeiral, F. (1)
Urli, S. (1)
Maia, M. (1)
Higher education institution
Kungliga Tekniska Högskolan (3)
Language
English (3)
Research subject (UKÄ/SCB)
Natural sciences (3)

