SwePub
Search the SwePub database


Result list for the search "WFRF:(Kommrusch Steve)"

Search: WFRF:(Kommrusch Steve)

  • Results 1-3 of 3
1.
  • Baudry, Benoit, et al. (authors)
  • A Software-Repair Robot Based on Continual Learning
  • 2021
  • In: IEEE Software. - Institute of Electrical and Electronics Engineers (IEEE). - ISSN 0740-7459, E-ISSN 1937-4194; 38:4, pp. 28-35
  • Journal article (peer-reviewed). Abstract:
    • Software bugs are common, and correcting them accounts for a significant portion of the costs in the software development and maintenance process. In this article, we discuss R-Hero, our novel system for learning how to fix bugs based on continual training.
2.
  • Chen, Zimin, et al. (authors)
  • Neural Transfer Learning for Repairing Security Vulnerabilities in C Code
  • 2023
  • In: IEEE Transactions on Software Engineering. - Institute of Electrical and Electronics Engineers (IEEE). - ISSN 0098-5589, E-ISSN 1939-3520; 49:1, pp. 147-165
  • Journal article (peer-reviewed). Abstract:
    • In this paper, we address the problem of automatic repair of software vulnerabilities with deep learning. The major problem with data-driven vulnerability repair is that the few existing datasets of known confirmed vulnerabilities consist of only a few thousand examples. However, training a deep learning model often requires hundreds of thousands of examples. In this work, we leverage the intuition that the bug fixing task and the vulnerability fixing task are related and that the knowledge learned from bug fixes can be transferred to fixing vulnerabilities. In the machine learning community, this technique is called transfer learning. In this paper, we propose an approach for repairing security vulnerabilities, named VRepair, which is based on transfer learning. VRepair is first trained on a large bug fix corpus and is then tuned on a vulnerability fix dataset, which is an order of magnitude smaller. In our experiments, we show that a model trained only on a bug fix corpus can already fix some vulnerabilities. Then, we demonstrate that transfer learning improves the ability to repair vulnerable C functions. We also show that the transfer learning model performs better than a model trained with a denoising task and fine-tuned on the vulnerability fixing task. To sum up, this paper shows that transfer learning works well for repairing security vulnerabilities in C compared to learning on a small dataset. (A minimal code sketch of this two-stage training schedule follows this entry.)
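The VRepair entry above describes a two-stage transfer-learning schedule: pre-train a sequence-to-sequence model on a large bug-fix corpus, then fine-tune it on a much smaller vulnerability-fix dataset. The sketch below illustrates only that schedule; the toy GRU model, the random placeholder data, and all names, sizes, and learning rates are assumptions made for illustration, not VRepair's actual architecture or data.

    # Minimal sketch, assuming PyTorch; placeholder model and data, not VRepair itself.
    import torch
    import torch.nn as nn

    VOCAB, DIM, MAXLEN = 1000, 64, 32

    class TinySeq2Seq(nn.Module):
        # Toy encoder-decoder standing in for the paper's transformer model.
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, DIM)
            self.encoder = nn.GRU(DIM, DIM, batch_first=True)
            self.decoder = nn.GRU(DIM, DIM, batch_first=True)
            self.out = nn.Linear(DIM, VOCAB)

        def forward(self, src, tgt):
            _, h = self.encoder(self.embed(src))       # encode the buggy/vulnerable tokens
            dec, _ = self.decoder(self.embed(tgt), h)  # decode the fix (shift omitted for brevity)
            return self.out(dec)

    def random_pairs(n):
        # Placeholder for real tokenized (broken code, fixed code) pairs.
        return [(torch.randint(0, VOCAB, (1, MAXLEN)), torch.randint(0, VOCAB, (1, MAXLEN)))
                for _ in range(n)]

    def train(model, pairs, epochs, lr):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for src, tgt in pairs:
                logits = model(src, tgt)
                loss = loss_fn(logits.view(-1, VOCAB), tgt.view(-1))
                opt.zero_grad(); loss.backward(); opt.step()

    model = TinySeq2Seq()
    train(model, random_pairs(1000), epochs=1, lr=1e-3)  # stage 1: large bug-fix corpus
    train(model, random_pairs(50), epochs=3, lr=1e-4)    # stage 2: small vulnerability-fix set

The point of the sketch is only the training order: the same weights are carried from the plentiful bug-fix data into the scarce vulnerability-fix data, which is the transfer-learning intuition the abstract describes.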
3.
  • Kommrusch, Steve, et al. (authors)
  • Self-Supervised Learning to Prove Equivalence Between Straight-Line Programs via Rewrite Rules
  • 2023
  • In: IEEE Transactions on Software Engineering. - Institute of Electrical and Electronics Engineers (IEEE). - ISSN 0098-5589, E-ISSN 1939-3520; 49:7, pp. 3771-3792
  • Journal article (peer-reviewed). Abstract:
    • We target the problem of automatically synthesizing proofs of semantic equivalence between two programs made of sequences of statements. We represent programs using abstract syntax trees (ASTs), where a given set of semantics-preserving rewrite rules can be applied to a specific AST pattern to generate a transformed and semantically equivalent program. In our system, two programs are equivalent if there exists a sequence of applications of these rewrite rules that leads to rewriting one program into the other. We propose a neural network architecture based on a transformer model to generate proofs of equivalence between program pairs. The system outputs a sequence of rewrites, and the validity of the sequence is checked simply by verifying that it can be applied. If no valid sequence is produced by the neural network, the system reports the programs as non-equivalent, ensuring by design that no programs may be incorrectly reported as equivalent. Our system is fully implemented for a single grammar that can represent straight-line programs with function calls and multiple types. To efficiently train the system to generate such sequences, we develop an original incremental training technique, named self-supervised sample selection. We extensively study the effectiveness of this novel training approach on proofs of increasing complexity and length. Our system, S4Eq, achieves 97% proof success on a curated dataset of 10,000 pairs of equivalent programs. (A minimal sketch of this proof-checking step follows this entry.)
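The S4Eq entry above rests on the observation that a candidate proof is just a sequence of semantics-preserving rewrites, so checking it only requires replaying the rewrites and comparing the result to the target program: any step that fails to apply, or a final program that differs from the target, invalidates the proof, so a pair can never be wrongly reported as equivalent. The sketch below is an assumption-laden illustration of that checking step, using a toy statement format and a single hand-written rule (commutativity of +) rather than the paper's grammar and rule set.

    # Minimal sketch: programs are tuples of statement strings, rules are partial functions.
    def comm_add(p, i):
        # Rewrite "x = a + b" into "x = b + a" at statement i; return None if it does not apply.
        parts = p[i].split()
        if len(parts) != 5 or parts[1] != "=" or parts[3] != "+":
            return None
        lhs, _, a, _, b = parts
        stmts = list(p)
        stmts[i] = f"{lhs} = {b} + {a}"
        return tuple(stmts)

    RULES = {"comm_add": comm_add}  # the real system has a full set of semantics-preserving rules

    def check_proof(source, target, proof):
        # Replay the proposed rewrite sequence; accept only if every step applies and
        # the final program is exactly the target, so equivalence is never over-reported.
        current = source
        for rule_name, stmt_index in proof:
            current = RULES[rule_name](current, stmt_index)
            if current is None:  # rewrite did not apply here: the proof is invalid
                return False
        return current == target

    p1 = ("t = a + b", "r = t + c")
    p2 = ("t = b + a", "r = c + t")
    print(check_proof(p1, p2, [("comm_add", 0), ("comm_add", 1)]))  # True: valid proof
    print(check_proof(p1, p2, [("comm_add", 0)]))                   # False: target not reached

Because the checker only ever says "equivalent" after successfully replaying a proof, a wrong or missing sequence from the neural network can only produce a conservative "non-equivalent" answer, which is the soundness-by-design property the abstract highlights.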