SwePub
Search the SwePub database

Search: WFRF:(Zamli Kamal Z.)

  • Result 1-10 of 17
1.
  • Ahmed, Bestoun S., 1982-, et al. (author)
  • An evaluation of Monte Carlo-based hyper-heuristic for interaction testing of industrial embedded software applications
  • 2020
  • In: Soft Computing - A Fusion of Foundations, Methodologies and Applications. - : Springer. - 1432-7643 .- 1433-7479. ; 24:18, pp. 13929-13954
  • Journal article (peer-reviewed), abstract:
    • Hyper-heuristic is a new methodology for the adaptive hybridization of meta-heuristic algorithms to derive a general algorithm for solving optimization problems. This work focuses on the selection type of hyper-heuristic, called the exponential Monte Carlo with counter (EMCQ). Current implementations rely on memory-less selection, which can be counterproductive as the selected search operator may not (historically) be the best-performing operator for the current search instance. Addressing this issue, we propose to integrate memory into EMCQ for combinatorial t-wise test suite generation using reinforcement learning based on the Q-learning mechanism, called Q-EMCQ. The limited application of combinatorial test generation to industrial programs can impact the use of techniques such as Q-EMCQ. Thus, there is a need to evaluate this kind of approach against relevant industrial software, with the purpose of showing the degree of interaction required to cover the code as well as finding faults. We applied Q-EMCQ to 37 real-world industrial programs written in the Function Block Diagram (FBD) language, which is used for developing a train control management system at Bombardier Transportation Sweden AB. The results show that Q-EMCQ is an efficient technique for test case generation. Additionally, unlike t-wise test suite generation, which deals with a minimization problem, we have also subjected Q-EMCQ to a maximization problem involving general module clustering to demonstrate the effectiveness of our approach. The results show that Q-EMCQ is also capable of outperforming the original EMCQ as well as several recent meta-/hyper-heuristics, including the modified choice function, Tabu high-level hyper-heuristic, teaching learning-based optimization, the sine cosine algorithm, and symbiotic optimization search in clustering quality within comparable execution time.
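
To make the selection mechanism described in this abstract concrete, here is a minimal sketch of Q-learning-based search operator selection in the spirit of Q-EMCQ. The operator names, reward scheme and learning parameters are illustrative assumptions, not the paper's exact design.

    import random

    # Hypothetical low-level search operators a hyper-heuristic chooses among.
    OPERATORS = ["crossover", "mutation", "local_search"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # assumed learning rate, discount, exploration

    q = {op: 0.0 for op in OPERATORS}  # the "memory" added on top of plain EMCQ

    def select_operator():
        # Mostly exploit the historically best operator, occasionally explore.
        if random.random() < EPSILON:
            return random.choice(OPERATORS)
        return max(q, key=q.get)

    def update(op, reward):
        # One-step Q-learning update: reward operators that improved the suite.
        q[op] += ALPHA * (reward + GAMMA * max(q.values()) - q[op])
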
2.
  • Ahmed, Bestoun S., 1982-, et al. (author)
  • Code-Aware Combinatorial Interaction Testing
  • 2019
  • In: IET Software. - London, England : Institution of Engineering and Technology. - 1751-8806 .- 1751-8814. ; 13:6, pp. 600-609
  • Journal article (peer-reviewed), abstract:
    • Combinatorial interaction testing (CIT) is a useful testing technique to address the interaction of input parameters in software systems. In many applications, the technique has been used as a systematic sampling technique to sample the enormous space of possible test cases. In the last decade, most research activities focused on the generation of CIT test suites, as it is a computationally complex problem. Although promising, less effort has been devoted to the application of CIT. In general, to apply CIT, practitioners must identify the input parameters of the software under test (SUT), feed these parameters to the CIT tool to generate the test suite, and then run those tests on the application with some pass and fail criteria for verification. Used this way, CIT is a black-box testing technique that ignores the effect of the internal code. Although useful, in practice not all parameters have the same impact on the SUT. This paper introduces a different approach that uses CIT as a gray-box testing technique by considering the internal code structure of the SUT to determine the impact of each input parameter and then using this impact in the test generation stage. We applied our approach to five reliable case studies. The results showed that this approach helps detect new faults compared to the equal-impact parameter approach.
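
As an illustration of the black-box CIT workflow that this abstract starts from, here is a minimal greedy sketch that builds a pairwise (2-wise) test suite. The parameters and values are invented, and real CIT tools use more sophisticated generation algorithms.

    from itertools import combinations, product

    # Invented input parameters of the software under test.
    parameters = {"os": ["linux", "windows"],
                  "db": ["mysql", "sqlite"],
                  "cache": ["on", "off"]}
    names = list(parameters)

    def pairs_of(test):
        # The set of parameter-value pairs a single test covers.
        return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

    # Every value pair that a pairwise suite must cover.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va, vb in product(parameters[a], parameters[b])}

    suite = []
    while uncovered:
        # Greedily pick the full assignment covering the most remaining pairs.
        best = max(product(*parameters.values()),
                   key=lambda vals: len(pairs_of(dict(zip(names, vals))) & uncovered))
        test = dict(zip(names, best))
        suite.append(test)
        uncovered -= pairs_of(test)

    print(len(suite), "tests instead of the 8 exhaustive combinations")
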
3.
  • Ahmed, Bestoun S., 1982-, et al. (author)
  • Constrained interaction testing : A systematic literature study
  • 2017
  • In: IEEE Access. - Sweden : IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC. - 2169-3536. ; 5, pp. 25706-25730
  • Journal article (peer-reviewed), abstract:
    • Interaction testing can be used to effectively detect faults that are otherwise difficult to find by other testing techniques. However, in practice, the input configurations of software systems are subject to constraints, especially in the case of highly configurable systems. Handling constraints effectively and efficiently in combinatorial interaction testing is a challenging problem. Nevertheless, researchers have attacked this challenge through different techniques, and much progress has been achieved in the past decade. Thus, it is useful to reflect on the current achievements and shortcomings and to identify potential areas of improvement. This paper presents the first comprehensive and systematic literature study to structure and categorize the research contributions on constrained interaction testing. Following the guidelines for conducting a literature study, the relevant data are extracted from a set of 103 research papers on constrained interaction testing. The topics addressed in constrained interaction testing research are classified into four categories: constraint test generation, application, generation and application, and model validation studies. The papers within each of these categories are extensively reviewed. Apart from answering several other research questions, this paper also discusses the applications of constrained interaction testing in several domains, such as software product lines, fault detection and characterization, test selection, security, and graphical user interface testing. The paper ends with a discussion of limitations, challenges, and future work in the area.
4.
  • Ahmed, Bestoun S., 1982-, et al. (author)
  • Handling constraints in combinatorial interaction testing in the presence of multi objective particle swarm and multithreading
  • 2017
  • In: Information and Software Technology. - : Elsevier. - 0950-5849 .- 1873-6025. ; 86, pp. 20-36
  • Journal article (peer-reviewed), abstract:
    • Context: Combinatorial testing strategies have lately received a lot of attention as a result of their diverse applications. In its simple form, a combinatorial strategy can reduce several input parameters (configurations) of a system into a small set based on their interaction (or combination). In practice, the input configurations of software systems are subject to constraints, especially in the case of highly configurable systems. Implementing this feature within a strategy introduces many construction difficulties. While there are many combinatorial interaction testing strategies nowadays, few of them support constraints. Objective: This paper presents a new strategy to construct combinatorial interaction test suites in the presence of constraints. Method: The design and algorithms are provided in detail. To overcome the multi-judgement criteria for an optimal solution, multi-objective particle swarm optimisation and multithreading are used. The strategy and its associated algorithms are evaluated extensively using different benchmarks and comparisons. Results: Our results are promising, as the evaluation showed the efficiency and performance of each algorithm in the strategy. The benchmarking results also showed that the strategy can generate constrained test suites efficiently compared to state-of-the-art strategies. Conclusion: The proposed strategy offers a new way of constructing constrained combinatorial interaction test suites and can form a new and effective base for future implementations.
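
For readers unfamiliar with the underlying optimiser, the following is a bare-bones, single-objective particle swarm update. The coefficients and the fitness interface are assumptions; the paper's strategy is multi-objective and multithreaded on top of this basic scheme, and positions would be mapped to discrete test values.

    import random

    W, C1, C2 = 0.7, 1.5, 1.5  # assumed inertia and acceleration coefficients

    def pso_step(particles, gbest, fitness):
        # One velocity/position update; each particle holds 'pos', 'vel' and
        # 'best' (personal best) lists of equal length.
        for p in particles:
            for i in range(len(p["pos"])):
                r1, r2 = random.random(), random.random()
                p["vel"][i] = (W * p["vel"][i]
                               + C1 * r1 * (p["best"][i] - p["pos"][i])
                               + C2 * r2 * (gbest[i] - p["pos"][i]))
                p["pos"][i] += p["vel"][i]  # rounded to a discrete value in CIT
            if fitness(p["pos"]) > fitness(p["best"]):
                p["best"] = list(p["pos"])
        # Return the new global best for the next iteration.
        return max((p["best"] for p in particles), key=fitness)
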
5.
  • Ahmed, Bestoun S., 1982-, et al. (author)
  • Optimum Design of PIλDμ controller for an automatic voltage regulator system using combinatorial test design
  • 2016
  • In: PLOS ONE. - : PUBLIC LIBRARY OF SCIENCE. - 1932-6203. ; 11:11
  • Journal article (peer-reviewed), abstract:
    • Combinatorial test design is a test planning technique that aims to systematically reduce the number of test cases by choosing a subset of the test cases based on combinations of input variables. The subset covers all possible combinations of a given strength and hence tries to match the effectiveness of the exhaustive set. This reduction mechanism has been used successfully in software testing research with t-way testing (where t indicates the interaction strength of combinations). Potentially, other systems may exhibit many similarities with this approach; hence, it could form an emerging application in different areas of research due to its usefulness, and it has recently been applied successfully in a few research areas. In this paper, we explore the applicability of the combinatorial test design technique to the parameter design of a fractional-order (FO) proportional-integral-derivative (PID) controller, named FOPID, for an automatic voltage regulator (AVR) system. Throughout the paper, we justify this new application theoretically and practically through simulations. In addition, we report on first experiments indicating its practical use in this field. We designed different algorithms and adapted other strategies to cover all the combinations with an optimal and effective test set. Our findings indicate that combinatorial test design can find the combinations that lead to an optimum design. Moreover, we found that by increasing the strength of the combinations we approach the optimum design, so that with only a 4-way combinatorial set we obtain the effectiveness of an exhaustive test set. This significantly reduces the number of tests needed and thus leads to an approach that optimizes the parameter design quickly.
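
The following sketch illustrates the idea of searching a discretized FOPID parameter space combinatorially. The grid values and cost function are placeholders (a real evaluation would simulate the AVR step response), and the paper samples t-way subsets rather than the exhaustive product shown here.

    from itertools import product

    # Placeholder discretization of the five FOPID parameters.
    grid = {
        "Kp": [0.5, 1.0, 1.5],
        "Ki": [0.2, 0.4, 0.8],
        "Kd": [0.1, 0.2, 0.4],
        "lam": [0.8, 1.0, 1.2],  # fractional integration order (lambda)
        "mu": [0.8, 1.0, 1.2],   # fractional differentiation order (mu)
    }

    def cost(params):
        # Stand-in for a step-response quality metric such as ITAE.
        return sum(v * v for v in params.values())

    best = min((dict(zip(grid, values)) for values in product(*grid.values())),
               key=cost)
    print("best parameter combination:", best)
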
6.
  • Bures, Miroslav, et al. (author)
  • Prioritized Process Test : An Alternative to Current Process Testing Strategies
  • 2019
  • In: International journal of software engineering and knowledge engineering. - : World Scientific Publishing. - 0218-1940. ; 29:7, pp. 997-1028
  • Journal article (peer-reviewed), abstract:
    • Testing processes and workflows in information and Internet of Things systems is a major part of the typical software testing effort. Consistent and efficient path-based test cases are desired to support these tests. Because certain parts of software system workflows have a higher business priority than others, this fact has to be reflected in the generation of test cases. In this paper, we propose the Prioritized Process Test (PPT), a model-based test case generation algorithm that represents an alternative to currently established algorithms that use directed graphs and test requirements to model the system under test. The PPT accepts a directed multigraph as a model to express priorities, and edge weights are used instead of test requirements. To determine the test-coverage level of test cases, a test-depth-level concept is used. We compared the PPT with five alternatives (the Process Cycle Test (PCT), a naive reduction of the test set created by the PCT, a brute-force algorithm, a set-covering-based solution and a matching-based prefix graph solution) for edge coverage and edge-pair coverage. To assess the optimality of the path-based test cases produced by these strategies, we used 14 metrics based on the properties of these test cases and 59 models created for three real-world systems. For edge coverage, the PPT produced more optimal test cases than the alternatives in terms of the majority of the metrics. For edge-pair coverage, the PPT strategy yielded results similar to those of the alternatives. Thus, the PPT strategy is an applicable alternative, as it reflects both the required test coverage level and the business priority in parallel.
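
To illustrate the priority-driven idea, here is a toy sketch that enumerates paths through a weighted directed graph and ranks them by accumulated edge weight. The graph is invented, and the actual PPT works on multigraphs and is governed by the test-depth-level concept rather than simple path enumeration.

    # node -> list of (successor, business-priority weight); invented workflow.
    graph = {
        "start": [("a", 3), ("b", 1)],
        "a": [("end", 2)],
        "b": [("end", 1), ("a", 2)],
        "end": [],
    }

    def simple_paths(node, target, seen=()):
        # Yield (path, total_weight) for every simple path to the target.
        if node == target:
            yield [node], 0
            return
        for nxt, w in graph[node]:
            if nxt not in seen:
                for path, weight in simple_paths(nxt, target, seen + (node,)):
                    yield [node] + path, weight + w

    # Highest-weight paths first: test cases exercising high-priority edges.
    for path, weight in sorted(simple_paths("start", "end"), key=lambda pw: -pw[1]):
        print(weight, " -> ".join(path))
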
7.
  • Hasan, Imad H., et al. (author)
  • Generation and Application of Constrained Interaction Test Suites Using Base Forbidden Tuples with a Mixed Neighborhood Tabu Search
  • 2020
  • In: International journal of software engineering and knowledge engineering. - Singapore : World Scientific. - 0218-1940. ; 30:3, pp. 363-398
  • Journal article (peer-reviewed), abstract:
    • To ensure the quality of current highly configurable software systems, intensive testing is needed to cover all the configuration combinations and detect all possible faults. This task becomes more challenging for most modern software systems when constraints are imposed on the configurations; exhaustive testing is then almost impossible, especially considering the additional computation required to resolve the constraints during the test generation process, and the testing process becomes very time-consuming. Combinatorial interaction strategies can systematically reduce the number of test cases and construct a minimal test suite without affecting the effectiveness of the tests. This paper presents a new efficient search-based strategy to generate constrained interaction test suites that cover all possible combinations. The paper also shows a new application of constrained interaction testing in software fault searches. The proposed strategy initially generates the set of all possible t-tuple combinations; it then filters the set by removing the forbidden t-tuples using the Base Forbidden Tuple (BFT) approach. The strategy also utilizes a mixed neighborhood tabu search (TS) to construct optimal or near-optimal constrained test suites. The efficiency of the proposed method is evaluated through a comparison against two well-known state-of-the-art tools. The evaluation consists of three sets of experiments on 35 standard benchmarks. Additionally, the effectiveness and quality of the results are assessed using a real-world case study. Experimental results show that the proposed strategy outperforms one of the competitive strategies, ACTS, for approximately 83% of the benchmarks and achieves results similar to CASA for 65% of the benchmarks when the interaction strength is 2. For an interaction strength of 3, the proposed method outperforms the other competitive strategies for approximately 60% and 42% of the benchmarks, respectively. The proposed strategy can also generate constrained interaction test suites for an interaction strength of 4, which is not possible for many strategies. The real-world case study shows that the generated test suites can effectively detect injected faults using mutation testing.
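
The first stage of the strategy, generating all t-tuples and discarding those that contain a forbidden tuple, can be sketched as follows for t = 2. The configuration model and the forbidden tuple are invented; deriving the base forbidden tuples from general constraints is itself a key part of the BFT approach.

    from itertools import combinations, product

    # Invented configuration model and a single forbidden combination.
    parameters = {"cpu": ["arm", "x86"],
                  "os": ["linux", "windows"],
                  "gpu": ["on", "off"]}
    forbidden = [{("cpu", "arm"), ("os", "windows")}]  # arm + windows disallowed

    names = list(parameters)
    tuples = [{(a, va), (b, vb)}
              for a, b in combinations(names, 2)
              for va, vb in product(parameters[a], parameters[b])]

    # Keep only the 2-wise tuples that contain no forbidden tuple.
    valid = [t for t in tuples if not any(f <= t for f in forbidden)]
    print(len(tuples), "tuples generated,", len(valid), "remain after BFT filtering")
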
8.
  • Hujainah, Fadhl Mohammad Omar, 1987, et al. (author)
  • SRPTackle: A semi-automated requirements prioritisation technique for scalable requirements of software system projects
  • 2021
  • In: Information and Software Technology. - : Elsevier BV. - 0950-5849. ; 131
  • Journal article (peer-reviewed), abstract:
    • Context: Requirement prioritisation (RP) is often used to select the most important system requirements as perceived by system stakeholders. RP plays a vital role in ensuring the development of a quality system within defined constraints. However, a closer look at existing RP techniques reveals that they suffer from some key challenges, such as scalability, lack of quantification, insufficient prioritisation of participating stakeholders, overreliance on the participation of professional expertise, lack of automation and excessive time consumption. These key challenges serve as the motivation for the present research. Objective: This study aims to propose a new semi-automated scalable prioritisation technique called 'SRPTackle' to address these key challenges. Method: SRPTackle provides a semi-automated process based on a combination of a constructed requirement priority value formulation function using a multi-criteria decision-making method (i.e. the weighted sum model), clustering algorithms (K-means and K-means++) and a binary search tree to minimise the need for expert involvement and increase efficiency. The effectiveness of SRPTackle is assessed in seven experiments using a benchmark dataset from a large real software project. Results: The experiment results reveal that SRPTackle achieves 93.0% and 94.65% as minimum and maximum accuracy percentages, respectively. These values are better than those of alternative techniques. The findings also demonstrate the capability of SRPTackle to prioritise large-scale requirements with reduced time consumption and its effectiveness in addressing the key challenges compared with other techniques. Conclusion: With the time effectiveness, ability to scale to numerous requirements, automation and clear implementation guidelines of SRPTackle, project managers can perform RP for large-scale requirements properly, without an extensive amount of effort (e.g. tedious manual processes, the involvement of experts and time workload).
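
A minimal sketch of the requirement priority value step might look as follows. The criteria, weights and scores are invented, and the use of scikit-learn's K-means is an assumption standing in for the paper's K-means/K-means++ clustering and binary search tree machinery.

    import numpy as np
    from sklearn.cluster import KMeans

    weights = np.array([0.5, 0.3, 0.2])  # e.g. stakeholder importance, value, risk
    scores = np.array([                  # one row of criteria scores per requirement
        [5, 3, 4], [2, 2, 1], [4, 5, 5], [1, 2, 3], [3, 4, 2],
    ])

    rpv = scores @ weights  # weighted sum model: one priority value per requirement
    groups = KMeans(n_clusters=3, n_init=10).fit_predict(rpv.reshape(-1, 1))

    for i, (value, group) in enumerate(zip(rpv, groups)):
        print(f"requirement {i}: priority value {value:.2f}, cluster {group}")
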
9.
  • Kader, Md. Abdul, et al. (author)
  • A systematic review on emperor penguin optimizer
  • 2021
  • In: Neural Computing & Applications. - : Springer. - 0941-0643 .- 1433-3058. ; 33:23, pp. 15933-15953
  • Journal article (peer-reviewed), abstract:
    • The Emperor Penguin Optimizer (EPO) is a recently developed metaheuristic algorithm for solving general optimization problems. The main strength of EPO is twofold. Firstly, EPO has a low learning curve, as it is based on the simple analogy of the huddling behavior of emperor penguins in nature (a survival strategy during the Antarctic winter). Secondly, EPO offers a straightforward implementation. In the EPO, the emperor penguins represent the candidate solutions, the huddle denotes the search space, which comprises a two-dimensional L-shaped polygon plane, and the random positions of the emperor penguins represent the feasible solutions. Among all the emperor penguins, the focus is to locate an effective mover, representing the global optimal solution. To date, EPO has been gaining considerable momentum owing to its successful adoption in a broad range of optimization problems: medical data classification, the economic load dispatch problem, engineering design problems, face recognition, multilevel thresholding for color image segmentation, high-dimensional biomedical data analysis for microarray cancer classification, automatic feature selection, event recognition and summarization, smart grid systems, and traffic management systems, to name a few. Reflecting on this progress, this paper presents an in-depth study of EPO's current adoption in the scientific literature. In addition to highlighting new potential areas for improvement (and omission), the findings of this study can serve as guidelines for researchers and practitioners to improve the current state of the art and state of practice in the general adoption of EPO, while highlighting its new emerging areas of application.
10.
  • Nasser, Abdullah B., et al. (author)
  • An Adaptive Opposition-based Learning Selection: The Case for Jaya Algorithm
  • 2021
  • In: IEEE Access. - 2169-3536 .- 2169-3536. ; 9, pp. 55581-55594
  • Journal article (peer-reviewed), abstract:
    • Over the years, the opposition-based learning (OBL) technique has been proven to effectively enhance the convergence of meta-heuristic algorithms. The fact that OBL is able to give alternative candidate solutions in one or more opposite directions ensures good exploration and exploitation of the search space. In the last decade, many OBL techniques have been established in the literature, including Standard-OBL, General-OBL, Quasi-Reflection-OBL, Centre-OBL and Optimal-OBL. Although proven useful, much of the existing adoption of OBL into meta-heuristic algorithms has been based on a single technique. If the search space contains many peaks with potentially many local optima, relying on a single OBL technique may not be sufficiently effective. In fact, if the peaks are close together, relying on a single OBL technique may not prevent entrapment in local optima. Addressing this issue, assembling a sequence of OBL techniques into a meta-heuristic algorithm can be useful to enhance the overall search performance. Based on a simple penalize-and-reward mechanism, the best-performing OBL is rewarded by continuing its execution in the next cycle, whilst a poorly performing one ceases its current turn. This paper presents a new adaptive approach to integrating more than one OBL technique into the Jaya Algorithm, termed OBL-JA. Unlike other adoptions of OBL, which use one type of OBL, OBL-JA uses several OBLs and selects among them based on their individual performance. Experimental results using combinatorial testing problems as a case study demonstrate that OBL-JA shows very competitive results against existing works in terms of test suite size. The results also show that OBL-JA performs better than the standard Jaya Algorithm in most of the tested cases, owing to its ability to adapt its behaviour based on the current performance feedback of the search process.
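
The two ingredients, opposite-candidate generation and penalize-and-reward selection among OBL variants, can be sketched as follows. The bounds, fitness interface and scoring rule are illustrative assumptions rather than OBL-JA's exact formulation.

    import random

    def standard_obl(x, lo, hi):
        # Standard OBL: mirror the candidate across the centre of the interval.
        return lo + hi - x

    def quasi_reflected_obl(x, lo, hi):
        # Quasi-reflection: a random point between the candidate and the centre.
        centre = (lo + hi) / 2
        return random.uniform(min(x, centre), max(x, centre))

    techniques = {"standard": standard_obl, "quasi_reflected": quasi_reflected_obl}
    score = {name: 0 for name in techniques}  # penalize-and-reward ledger

    def opposite(x, lo, hi, fitness):
        # Let the best-scoring technique take the turn; reward it only if its
        # opposite candidate actually improves on x, penalize it otherwise.
        name = max(score, key=score.get)
        candidate = techniques[name](x, lo, hi)
        score[name] += 1 if fitness(candidate) > fitness(x) else -1
        return candidate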