SwePub
Search the SwePub database


Search: WFRF:(Horkhoff Jennifer)

  • Result 1-7 of 7
2.
  • Alahyari, H., et al. (author)
  • What Do Agile Teams Find Important for their Success?
  • 2018
  • In: 2018 25th Asia-Pacific Software Engineering Conference (APSEC). - IEEE. - ISBN 9781728119700
  • Conference paper (peer-reviewed). Abstract:
    • Although the general benefits of agile methods have been shown, it is not always clear what makes the application of agile methods successful or not in a company. With this motivation, we investigate agile success factors, particularly from the viewpoint of teams. We conduct in-company surveys to collect and rank agile team success factors, comparing these results with success factors found in the literature. Our results introduce new success factors not previously discussed in related work. The findings emphasize the importance of team environment, team spirit, and team capability, as opposed to previous work, which emphasizes project management process and customer involvement. These findings can help identify issues and improve the performance of agile teams.
3.
  • Habibullah, Khan Mohammad, et al. (author)
  • Non-Functional Requirements for Machine Learning: An Exploration of System Scope and Interest
  • 2022
  • In: Proceedings - Workshop on Software Engineering for Responsible AI, SE4RAI 2022. - New York, NY, USA : ACM, pp. 29-36
  • Conference paper (peer-reviewed). Abstract:
    • Systems that rely on Machine Learning (ML systems) have differing demands on quality - non-functional requirements (NFRs) - compared to traditional systems. NFRs for ML systems may differ in their definition, scope, and importance. Despite the importance of NFRs for ML systems, our understanding of their definitions and scope - and of the extent of existing research - is lacking compared to our understanding in traditional domains. Building on an investigation into the importance and treatment of ML system NFRs in industry, we make three contributions towards narrowing this gap: (1) we present clusters of ML system NFRs based on shared characteristics, (2) we use Scopus search results - as well as inter-coder reliability on a sample of NFRs - to estimate the number of relevant studies on a subset of the NFRs, and (3) we use our initial reading of titles and abstracts in each sample to define the scope of NFRs over parts of the system (e.g., training data, ML model). These initial findings form the groundwork for future research in this emerging domain. CCS Concepts: • Software and its engineering → Extra-functional properties; Requirements analysis; • Computing methodologies → Machine learning.
4.
  • Heckmann Barbalho de Figueroa, Laiz, et al. (author)
  • A Modeling Approach for Bioinformatics Workflows
  • 2019
  • In: The Practice of Enterprise Modeling - 12th IFIP Working Conference, PoEM 2019, Luxembourg, Luxembourg, November 27-29, 2019, Proceedings. - Cham : Springer.
  • Conference paper (peer-reviewed)
6.
  • Ronanki, Krishna, 1997, et al. (author)
  • RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems
  • 2023
  • In: ACM International Conference Proceeding Series.
  • Conference paper (peer-reviewed). Abstract:
    • Complying with the EU AI Act (AIA) guidelines while developing and implementing AI systems will soon be mandatory within the EU. However, practitioners lack actionable instructions to operationalise ethics during AI system development. A literature review of different ethical guidelines revealed inconsistencies in the principles addressed and the terminology used to describe them. Furthermore, requirements engineering (RE), which has been identified as fostering trustworthiness in the AI development process from the early stages, was observed to be absent in many frameworks that support the development of ethical and trustworthy AI. This incongruous phrasing, combined with a lack of concrete development practices, makes trustworthy AI development harder. To address these concerns, we formulated a comparison table of the terminology used and the coverage of ethical AI principles in major ethical AI guidelines. We then examined the applicability of ethical AI development frameworks for performing effective RE during the development of trustworthy AI systems. A tertiary review and meta-analysis of literature discussing ethical AI frameworks revealed their limitations when developing trustworthy AI. Based on our findings, we propose recommendations to address these limitations during the development of trustworthy AI.
7.
  • Scandariato, Riccardo, 1975, et al. (author)
  • Generative secure design, defined
  • 2018
  • In: Proceedings - International Conference on Software Engineering. - New York, NY, USA : ACM. - ISSN 0270-5257. - Part F137347, pp. 1-4
  • Conference paper (peer-reviewed). Abstract:
    • In software-intensive industries, companies face the constant challenge of not having enough security experts on staff to validate the design of the high-complexity projects they run. Many of these companies are now realizing that increasing automation in their secure development process is the only way forward to cope with the ultra-large scale of modern systems. This paper embraces that viewpoint. We chart the roadmap to the development of a generative design tool that iteratively produces several design alternatives, each attempting to satisfy the security goals by incorporating security mechanisms. The tool explores the space of possible solutions by starting from well-known security techniques and creating variations via mutations and crossovers. By incorporating user feedback, the tool generates increasingly better design alternatives.
