SwePub
Search the SwePub database



Search: WFRF:(Lindström Birgitta)

  • Result 1-50 of 89
1.
  • Atif, Yacine, 1967-, et al. (author)
  • Cyber-threat analysis for Cyber-Physical Systems : Technical report for Package 4, Activity 3 of ELVIRA project
  • 2018
  • Reports (other academic/artistic) abstract:
    • Smart grid employs ICT infrastructure and network connectivity to optimize efficiency and deliver new functionalities. This evolution is associated with an increased risk for cybersecurity threats that may hamper smart grid operations. Power utility providers need tools for assessing risk of prevailing cyberthreats over ICT infrastructures. The need for frameworks to guide the development of these tools is essential to define and reveal vulnerability analysis indicators. We propose a data-driven approach for designing testbeds to evaluate the vulnerability of cyberphysical systems against cyberthreats. The proposed framework uses data reported from multiple components of cyberphysical system architecture layers, including physical, control, and cyber layers. At the physical layer, we consider component inventory and related physical flows. At the control level, we consider control data, such as SCADA data flows in industrial and critical infrastructure control systems. Finally, at the cyber layer level, we consider existing security and monitoring data from cyber-incident event management tools, which are increasingly embedded into the control fabrics of cyberphysical systems.
  •  
2.
  • Ivarsson, Lina Birgitta, et al. (author)
  • Treatment of Urethral Pain Syndrome (UPS) in Sweden
  • 2019
  • In: PLOS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 14:11
  • Journal article (peer-reviewed) abstract:
    • BACKGROUND: Urethral Pain Syndrome (UPS) in women is a recurrent urethral pain without any proven infection or other obvious pathology. There are few studies on UPS, and evidence-based treatment is lacking. The primary aim was to study what treatments are used, and to compare the treatment tradition of UPS in Sweden in 2018, with what was used in 2006. METHODS: A questionnaire on the treatment of women with UPS was sent to all public gynecology, urology, gynecologic oncology and venereology clinics, and one public general practice in each county in Sweden in 2018. Private practice clinics in gynecology responded to the survey in 2017. Comparisons were made with the same survey sent to gynecology and urology clinics in 2006. FINDINGS: Of 137 invited clinics in 2018, 99 (72.3%) responded to the survey. Seventy-seven (77.8%) of them saw women with UPS and 79.2% (61/77) of these clinics treated the patients using 19 different treatment methods. Local corticosteroids and local estrogens were the methods most used. Treatments were similar in gynecology and urology clinics in 2006 and 2018, although strong corticosteroids had increased in use in the treatment regimens of 2018. More than half of the clinics used antibiotics. INTERPRETATION: Since there is no evidence-based treatment of UPS, a wide spectrum of treatments is used, and different specialties use different treatment strategies. Despite the lack of proven infection, a large number of clinics also treated the syndrome with antibiotics. There is thus a need for well-designed randomized controlled clinical trials to find evidence-based treatments of UPS.
  •  
3.
  • Atif, Yacine, 1967-, et al. (author)
  • A fuzzy logic approach to influence maximization in social networks
  • 2020
  • In: Journal of Ambient Intelligence and Humanized Computing. - : Springer. - 1868-5137 .- 1868-5145. ; 11:6, s. 2435-2451
  • Journal article (peer-reviewed) abstract:
    • Within a community, social relationships are paramount to profile individuals’ conduct. For instance, an individual within a social network might be compelled to embrace a behaviour that his/her companion has recently adopted. Such social attitude is labelled social influence, which assesses the extent by which an individual’s social neighbourhood adopt that individual’s behaviour. We suggest an original approach to influence maximization using a fuzzy-logic based model, which combines influence-weights associated with historical logs of the social network users, and their favourable location in the network. Our approach uses a two-phase process to maximise influence diffusion. First, we harness the complexity of the problem by partitioning the network into significantly-enriched community-structures, which we then use as modules to locate the most influential nodes across the entire network. These key users are determined relatively to a fuzzy-logic based technique that identifies the most influential users, out of which the seed-set candidates to diffuse a behaviour or an innovation are extracted following the allocated budget for the influence campaign. This way of dealing with influence propagation in social networks is different from previous models, which do not compare structural and behavioural attributes among members of the network. The performance results show the validity of the proposed partitioning-approach of a social network into communities, and its contribution to “activate” a higher number of nodes overall. Our experimental study involves both empirical and real contemporary social-networks, whereby a smaller seed set of key users is shown to scale influence to the high-end compared to some renowned techniques, which employ a larger seed set of key users and yet influence fewer nodes in the social network.
  •  
4.
  • Atif, Yacine, 1967-, et al. (author)
  • Cyber-Threat Intelligence Architecture for Smart-Grid Critical Infrastructures Protection
  • 2017
  • Conference paper (peer-reviewed) abstract:
    • Critical infrastructures (CIs) are becoming increasingly sophisticated with embedded cyber-physical systems (CPSs) that provide managerial automation and autonomic controls. Yet these advances expose CI components to new cyber-threats, leading to a chain of dysfunctionalities with catastrophic socio-economic implications. We propose a comprehensive architectural model to support the development of incident management tools that provide situation-awareness and cyber-threats intelligence for CI protection, with a special focus on smart-grid CI. The goal is to unleash forensic data from CPS-based CIs to perform predictive analytics. In doing so, we use AI (Artificial Intelligence) paradigms for data collection, threat detection, and cascade-effects prediction.
  •  
5.
  • Atif, Yacine, 1967-, et al. (author)
  • Multi-agent Systems for Power Grid Monitoring : Technical report for Package 4.1 of ELVIRA project
  • 2018
  • Reports (other academic/artistic) abstract:
    • This document reports a technical description of ELVIRA project results obtained as part of Work-package 4.1 entitled “Multi-agent systems for power Grid monitoring”. ELVIRA project is a collaboration between researchers in School of IT at University of Skövde and Combitech Technical Consulting Company in Sweden, with the aim to design, develop and test a testbed simulator for critical infrastructures cybersecurity. This report outlines intelligent approaches that continuously analyze data flows generated by Supervisory Control And Data Acquisition (SCADA) systems, which monitor contemporary power grid infrastructures. However, cybersecurity threats and security mechanisms cannot be analyzed and tested on actual systems, and thus testbed simulators are necessary to assess vulnerabilities and evaluate the infrastructure resilience against cyberattacks. This report suggests an agent-based model to simulate SCADA-like cyber-components behaviour when facing cyber-infection in order to experiment and test intelligent mitigation mechanisms.
  •  
6.
  •  
7.
  • Brain, Cecilia, 1969, et al. (author)
  • Drug attitude and other predictors of medication adherence in schizophrenia : 12 months of electronic monitoring (MEMS (R)) in the Swedish COAST-study
  • 2013
  • In: European Neuropsychopharmacology. - : Elsevier BV. - 0924-977X .- 1873-7862. ; 23:12, s. 1754-1762
  • Journal article (peer-reviewed) abstract:
    • The aim was to investigate clinical predictors of adherence to antipsychotics. Medication use was electronically monitored with a Medication Event Monitoring System (MEMS®) for 12 months in 112 outpatients with schizophrenia and schizophrenia-like psychosis according to DSM-IV. Symptom burden, insight, psychosocial function (PSP) and side effects were rated at baseline. A comprehensive neuropsychological test battery was administered and a global composite score was calculated. The Drug Attitude Inventory (DAI-10) was filled in. A slightly modified DAI-10 version for informants was distributed as a postal questionnaire. Non-adherence (MEMS® adherence <= 0.80) was observed in 27%. In univariate regression models low scores on DAI-10 and DAI-10 informant, higher positive symptom burden, poor function, psychiatric side effects and lack of insight predicted non-adherence. No association was observed with global cognitive function. In multivariate regression models, low patient-rated DAI-10 and PSP scores emerged as predictors of non-adherence. A ROC analysis showed that DAI-10 had a moderate ability to correctly identify non-adherent patients (AUC=0.73, p<0.001). At the most "optimal" cut-off of 4, one-third of the adherent patients would falsely be identified as non-adherent. A somewhat larger AUC (0.78, p<0.001) was observed when the ROC procedure was applied to the final regression model including DAI-10 and PSP. For the subgroup with informant data, the AUC for the DAI-10 informant version was 0.68 (p=0.021). Non-adherence cannot be properly predicted in the clinical setting on the basis of these instruments alone. The DAI-10 informant questionnaire needs further testing.
  •  
8.
  • Brain, Cecilia, 1969, et al. (author)
  • Stigma, discrimination and medication adherence in schizophrenia: Results from the Swedish COAST study
  • 2014
  • In: Psychiatry Research. - : Elsevier BV. - 0165-1781 .- 1872-7123. ; 220:3, s. 811-817
  • Journal article (peer-reviewed) abstract:
    • The aims of this naturalistic non-interventional study were to quantify the level of stigma and discrimination in persons with schizophrenia and to test for potential associations between different types of stigma and adherence to antipsychotics. Antipsychotic medication use was electronically monitored with a Medication Event Monitoring System (MEMS) for 12 months in 111 outpatients with schizophrenia and schizophrenia-like psychosis (DSM-IV). Stigma was assessed at endpoint using the Discrimination and Stigma Scale (DISC). Single DISC items that were most frequently reported included social relationships in making/keeping friends (71%) and in the neighborhood (69%). About half of the patients experienced discrimination by their families, in intimate relationships, regarding employment and by mental health staff. Most patients (88%) wanted to conceal their mental health problems from others; 70% stated that anticipated discrimination resulted in avoidance of close personal relationships. Non-adherence (MEMS adherence ≤ 0.80) was observed in 30 patients (27.3%). When DISC subscale scores (SD) were entered in separate regression models, neither experienced nor anticipated stigma was associated with adherence. Our data do not support an association between stigma and non-adherence. Further studies in other settings are needed as experiences of stigma and levels of adherence and their potential associations might vary by health care system or cultural and sociodemographic contexts.
  •  
9.
  • Brain, Cecilia, 1969, et al. (author)
  • Twelve months of electronic monitoring (MEMS®) in the Swedish COAST-study : a comparison of methods for the measurement of adherence in schizophrenia
  • 2014
  • In: European Neuropsychopharmacology. - : Elsevier BV. - 1873-7862 .- 0924-977X. ; 24:2, s. 22-215
  • Journal article (peer-reviewed) abstract:
    • The primary aim was to compare objective and subjective measures of adherence in a naturalistic cohort of schizophrenia outpatients over 12 months between October 2008 and June 2011. Antipsychotic medication adherence was monitored in 117 outpatients diagnosed with schizophrenia or schizophrenia-like psychosis according to DSM-IV criteria in a naturalistic prospective study. Adherence was determined by the Medication Event Monitoring System (MEMS®), pill count, plasma levels and patient, staff, psychiatrist and close informant ratings. The plasma level adherence measure reflects adherence to medication and to lab visits. Relationships between MEMS® adherence and other measures were expressed as a concordance index and kappa (K). Non-adherence (MEMS® ≤0.80) was observed in 27% of the patients. MEMS® adherence was highly correlated with pill count (concordance=89% and K=0.72, p<0.001). Concordance and K were lower for all other adherence measures and very low for the relationship between MEMS® adherence and plasma levels (concordance=56% and K=0.05, p=0.217). Adherence measures were also entered into a principal component analysis that yielded three components. MEMS® recordings, pill count and informant ratings had their highest loadings in the first component, plasma levels alone in the second and patient, psychiatrist and staff ratings in the third. The strong agreement between MEMS® and pill count suggests that structured pill count might be a useful tool to follow adherence in clinical practice. The large discrepancy between MEMS® and the adherence measure based on plasma levels needs further study in clinical settings.
  •  
10.
  • Christiansen, Mats, 1972- (author)
  • Patient experiences and the influence on health literacy and self-care using mHealth to manage symptoms during radiotherapy for prostate cancer
  • 2019
  • Licentiate thesis (other academic/artistic) abstract:
    • Introduction: Prostate cancer is a diagnosis that can affect men’s quality of life, both through symptoms related to the disease and through the treatment the men receive. Treatment with radiotherapy for prostate cancer in Sweden takes place at outpatient clinics, where the patient visits daily for radiotherapy and then returns home. Most of the time the patient experiences symptoms and side-effects at home, without health-care professionals easily accessible. To facilitate person-centered care and improve clinical management as hospital care moves to outpatient care, the app (Interaktor) for smartphones and tablets was developed. Using patient-reported outcomes (PRO), the app was intended to identify symptoms early, assess them in real time, and provide symptom-management support during radiotherapy for prostate cancer. Aims: The overall objective of the intervention described in this thesis was to facilitate symptom management for patients with prostate cancer, assisted by an interactive app during radiotherapy treatment. Methods: The two studies included in this thesis come from one trial. A descriptive investigation evaluated the intervention group’s use and perception of the app, and a quasi-experimental investigation compared those using the app with a historical control group not using the app, to evaluate the effect on health literacy and self-care agency. The patients (n=130) were recruited consecutively from two university hospitals in Sweden between April 2012 and October 2013. The intervention group (n=66) had access to the app during 5-7 weeks of radiotherapy and three additional weeks. The intervention group’s use of the app was logged. Health literacy was measured using the Swedish Functional Health Literacy Scale (FHL) and the Swedish Communicative and Critical Health Literacy Scale (CCHL), and self-care agency was measured using the Appraisal of Self-care Agency scale, version A (patient’s assessment) (ASA-A). Transcribed notes from phone or face-to-face interviews about participants’ experiences of using and reporting in the app were analyzed. Results: In the intervention group using the app, adherence to daily reports was 87% (Md 92%, 16-100%), generating 3,536 reports. All listed symptoms were used, the most common being urinary urgency, fatigue, hot flushes, and difficulties in urinating. A total of 1,566 alerts were generated, a third of them severe (red alerts). In the interviews, the app was reported as easy to use and reporting became routine; reporting facilitated reflection over symptoms, and the symptoms were relevant, although some found it hard to nuance severity. Using the app was reported as providing a sense of security. Substantial portions of the participants in both groups showed inadequate FHL and CCHL at baseline. CCHL changed significantly for the intervention group from baseline to three months after ended treatment (p = 0.050). Functional health literacy and self-care agency did not reveal any statistically significant differences over time for either group. Conclusions: The conclusions to draw from this thesis are that an mHealth intervention, the app Interaktor, served as a supportive tool for patients to assess and manage symptoms during radiotherapy for prostate cancer. The intervention provided the patients with a sense of safety and an increased awareness of their own well-being, and a significant improvement in communicative and critical health literacy was found. The reported portions of inadequate health literacy leave substantial groups of patients more vulnerable in assessing and managing symptoms when treated with radiotherapy for prostate cancer. Although notable portions of patients in this study had inadequate levels of both functional and communicative and critical health literacy, adherence to using the app was high.
  •  
11.
  • Ding, Jianguo, et al. (author)
  • CPS-based Threat Modeling for Critical Infrastructure Protection
  • 2017
  • In: Performance Evaluation Review. - : ACM Publications. - 0163-5999 .- 1557-9484. ; 45:2, s. 129-132
  • Journal article (peer-reviewed) abstract:
    • Cyber-Physical Systems (CPSs) are augmenting traditional Critical Infrastructures (CIs) with data-rich operations. This integration creates complex interdependencies that expose CIs and their components to new threats. A systematic approach to threat modeling is necessary to assess CIs’ vulnerability to cyber, physical, or social attacks. We suggest a new threat modeling approach to systematically synthesize knowledge about the safety management of complex CIs and situational awareness that helps understanding the nature of a threat and its potential cascading-effects implications.
  •  
12.
  • Ding, Jianguo, et al. (author)
  • Towards Threat Modeling for CPS-based Critical Infrastructure Protection
  • 2015
  • In: Proceedings of the International Emergency Management Society (TIEMS), 22nd TIEMS Annual Conference. - Brussels : TIEMS, The International Emergency Management Society. - 9789490297138
  • Conference paper (peer-reviewed) abstract:
    • With the evolution of modern Critical Infrastructures (CI), more Cyber-Physical systems are integrated into the traditional CIs. This makes the CIs a multidimensional complex system, which is characterized by integrating cyber-physical systems into CI sectors (e.g., transportation, energy or food & agriculture). This integration creates complex interdependencies and dynamics among the system and its components. We suggest using a model with a multi-dimensional operational specification to allow detection of operational threats. Embedded (and distributed) information systems are critical parts of the CI where disruption can lead to serious consequences. Embedded information system protection is therefore crucial. As there are many different stakeholders of a CI, comprehensive protection must be viewed as a cross-sector activity to identify and monitor the critical elements, evaluate and determine the threat, and eliminate potential vulnerabilities in the CI. A systematic approach to threat modeling is necessary to support the CI threat and vulnerability assessment. We suggest a Threat Graph Model (TGM) to systematically model the complex CIs. Such modeling is expected to help the understanding of the nature of a threat and its impact throughout the system. In order to handle threat cascading, the model must capture local vulnerabilities as well as how a threat might propagate to other components. The model can be used for improving the resilience of the CI by encouraging a design that enhances the system's ability to predict threats and mitigate their damages. This paper surveys and investigates the various threats and current approaches to threat modeling of CI. We suggest integrating both a vulnerability model and an attack model, and we incorporate the interdependencies within and across CI sectors. Finally, we present a multi-dimensional threat modeling approach for critical infrastructure protection.
  •  
13.
  • Eriksson, Anders, et al. (author)
  • Model transformation impact on test artifacts : An empirical study
  • 2012
  • In: Proceedings of the Workshop on Model-Driven Engineering, Verification and Validation, MoDeVVa 2012. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450318013 ; , s. 5-10
  • Conference paper (peer-reviewed) abstract:
    • Development environments that support Model-Driven Development often focus on model-level functional testing, enabling verification of design models against their specifications. However, developers of safety-critical software systems are also required to show that tests cover the structure of the implementation. Unfortunately, the implementation structure can diverge from the model depending on choices such as the model compiler or target language. Therefore, structural coverage at the model level may not guarantee coverage of the implementation. We present results from an industrial experiment that demonstrates the model-compiler effect on test artifacts in xtUML models when these models are transformed into C++. Test artifacts, i.e., predicates and clauses, are used to satisfy the structural code coverage criterion, in this case MCDC, which is required by the US Federal Aviation Administration. The results of the experiment show not only that the implementation contains more test artifacts than the model, but also that the test artifacts can be deterministically enumerated during translation. The analysis identifies two major sources for these additional test artifacts. © 2012 ACM.
  •  
14.
  • Eriksson, Anders, et al. (author)
  • Transformation rules for platform independent testing : An empirical study
  • 2013
  • In: Proceedings of the Sixth IEEE International Conference on Software Testing, Verification and Validation, ICST 2013. - : IEEE conference proceedings. - 9781467359610 - 9780769549682 ; , s. 202-211
  • Conference paper (peer-reviewed) abstract:
    • Most Model-Driven Development projects focus on model-level functional testing. However, our recent study found an average of 67% additional logic-based test requirements from the code compared to the design model. The fact that full coverage at the design model level does not guarantee full coverage at the code level indicates that there are semantic behaviors in the model that model-based tests might miss, e.g., conditional behaviors that are not explicitly expressed as predicates and therefore not tested by logic-based coverage criteria. Avionics standards require that the structure of safety critical software is covered according to logic-based coverage criteria, including MCDC for the highest safety level. However, the standards also require that each test must be derived from the requirements. This combination makes tests hard, time-consuming, and expensive to design. This paper defines a new model that uses transformation rules to help testers define tests at the platform independent model level. The transformation rules have been applied to six large avionic applications. The results show that the new model reduced the difference between model and code with respect to the number of additional test requirements from an average of 67% to 0% in most cases and less than 1% for all applications. © 2013 IEEE.
  •  
15.
  • Eriksson, Anders, et al. (author)
  • UML Associations : Reducing the gap in test coverage between model and code
  • 2016
  • In: Proceedings of the 4th International Conference on Model-Driven Engineering and Software Development. - : SciTePress. - 9789897581687 ; , s. 589-599
  • Conference paper (peer-reviewed) abstract:
    • This paper addresses the overall problem of estimating the quality of a test suite when testing is performed at a platform-independent level, using executable UML models. The problem is that the test suite is often required to fulfill structural code coverage criteria. In the avionics domain it is usually required that the tests achieve 100% coverage according to logic-based coverage criteria. Such criteria are less effective when applied to executable UML models than when they are applied to code because the action code found in such models contains conditions in navigation and loops that are not explicit and therefore not captured by logic-based coverage criteria. We present two new coverage criteria for executable UML models, and we use an industrial application from the avionics domain to show that these two criteria should be combined with a logic-based criterion when testing the executable UML model. As long as the coverage is less than 100% at the model level, there is no point in running the tests at the code level since all functionality of the model is not yet tested, and this is necessary to achieve 100% coverage at the code level.
  •  
16.
  • Franzén, Åke, et al. (author)
  • I Teglets Spår : Svedala igår, idag, imorgon
  • 1986
  • Reports (pop. science, debate, etc.) abstract:
    • This publication about Svedala comprises three parts. The first part, "Svedala igår" (Svedala yesterday), is a historical overview describing Svedala's growth, above all from the mid-1800s to the early 1900s, when transformative development took place with the arrival of the railways and the establishment of industry. The second part, "Svedala idag" (Svedala today), describes present-day Svedala, its buildings and environment, everything from street and square settings to house details. In analyses of houses and details we show what is typical of the era's building practice, which still characterizes large parts of Svedala's building stock today. With Svedala's history and built environment as background, in a third part, "Svedala imorgon" (Svedala tomorrow), we discuss Svedala's future. We formulate a number of simple theses with advice and guidelines at the plan and house level in support of Svedala's built environment now and in the future.
  •  
17.
  • Gavali, Hamid, et al. (author)
  • Outcome of Radical Surgical Treatment of Abdominal Aortic Graft and Endograft Infections Comparing Extra-anatomic Bypass with In Situ Reconstruction : A Nationwide Multicentre Study
  • 2021
  • In: European Journal of Vascular and Endovascular Surgery. - : Saunders Elsevier. - 1078-5884 .- 1532-2165. ; 62:6, s. 918-926
  • Journal article (peer-reviewed) abstract:
    • Objective: Abdominal aortic graft and endograft infection (AGI) is primarily treated by resection of the infected graft and restoration of distal perfusion through extra-anatomic bypass (EAB) or in situ reconstruction/repair (ISR). The aim of this study was to compare these surgical strategies in a nationwide multicentre retrospective cohort study.Methods: The Swedish Vascular Registry (Swedvasc) was used to identify surgically treated abdominal AGIs in Sweden between January 1995 and May 2017. The primary aim was to compare short and long term survival, as well as complications for EAB and ISR.Results: Some 126 radically surgically treated AGI patients were identified – 102 graft infections and 24 endograft infections – treated by EAB: 71 and ISR: 55 (23 neo-aorto-iliac systems, NAISs). No differences in early 30 day (EAB 81.7% vs. ISR 76.4%, p =.46), or long term five year survival (48.2% vs. 49.9%, p =.87) were identified. There was no survival difference comparing NAIS to other ISR strategies. The frequency of recurrent graft infection during follow up was similar: EAB 20.3% vs. ISR 17.0% (p =.56). Survival and re-infection rates of the new conduit did not differ between NAIS and other ISR strategies. Age ≥ 75 years (odds ratio [OR] 4.0, confidence interval [CI] 1.1 – 14.8), coronary artery disease (OR 4.2, CI 1.2 – 15.1) and post-operative circulatory complications (OR 5.2, CI 1.2 – 22.5) were associated with early death. Prolonged antimicrobial therapy (> 3 months) was associated with reduced long term mortality (HR 0.3, CI 0.1 – 0.9).Conclusion: In this nationwide multicentre study comparing outcomes of radically treated AGI, no differences in survival or re-infection rate could be identified comparing EAB and ISR.
  •  
18.
  • Gavali, Hamid, et al. (author)
  • Semi-Conservative Treatment Versus Radical Surgery in Abdominal Aortic Graft and Endograft Infections
  • 2023
  • In: European Journal of Vascular and Endovascular Surgery. - : Elsevier. - 1078-5884 .- 1532-2165. ; 66:3, s. 397-406
  • Journal article (peer-reviewed) abstract:
    • Objective: Abdominal aortic graft and endograft infections (AGIs) are rare complications following aortic surgery. Radical surgery (RS) with resection of the infected graft and reconstruction with extra-anatomical bypass or in situ reconstruction is the preferred therapy. For patients unfit for RS, a semi-conservative (SC), graft preserving strategy is possible. This paper aimed to compare survival and infection outcomes between RS and SC treatment for AGI in a nationwide cohort.Methods: Patients with abdominal AGI related surgery in Sweden between January 1995 and May 2017 were identified. The Management of Aortic Graft Infection Collaboration (MAGIC) criteria were used for the definition of AGI. Multivariable regression was performed to identify factors associated with mortality.Results: One hundred and sixty-nine patients with surgically treated abdominal AGI were identified, comprising 43 SC (14 endografts; 53% with a graft enteric fistula [GEF] in total) and 126 RS (26 endografts; 50% with a GEF in total). The SC cohort was older and had a higher frequency of cardiac comorbidities. There was a non-significant trend towards lower Kaplan-Meier estimated five year survival for SC vs. RS (30.2% vs. 48.4%; p = .066). A non-significant trend was identified towards worse Kaplan-Meier estimated five year survival for SC patients with a GEF vs. without a GEF (21.7% vs. 40.1%; p = .097). There were significantly more recurrent graft infections comparing SC with RS (45.4% vs. 19.3%; p < .001). In a Cox regression model adjusting for confounders, there was no difference in five year survival comparing SC vs. RS (HR 1.0, 95% CI 0.6-1.5).Conclusion: In this national AGI cohort, there was no mortality difference comparing SC and RS for AGI when adjusting for comorbidities. Presence of GEF probably negatively impacts survival outcomes of SC patients. Rates of recurrent infection remain high for SC treated patients.
  •  
19.
  • González-Hernández, Loreto, et al. (author)
  • Using Mutant Stubbornness to Create Minimal and Prioritized Test Sets
  • 2018
  • In: 2018 IEEE International Conference on Software Quality, Reliability and Security (QRS). - : IEEE Computer Society. - 9781538677575 - 9781538677582 ; , s. 446-457
  • Conference paper (peer-reviewed) abstract:
    • In testing, engineers want to run the most useful tests early (prioritization). When tests are run hundreds or thousands of times, minimizing a test set can result in significant savings (minimization). This paper proposes a new analysis technique to address both the minimal test set and the test case prioritization problems. This paper precisely defines the concept of mutant stubbornness, which is the basis for our analysis technique. We empirically compare our technique with other test case minimization and prioritization techniques in terms of the size of the minimized test sets and how quickly mutants are killed. We used seven C language subjects from the Siemens Repository, specifically the test sets and the killing matrices from a previous study. We used 30 different orders for each set and ran every technique 100 times over each set. Results show that our analysis technique performed significantly better than prior techniques for creating minimal test sets and was able to establish new bounds for all cases. Also, our analysis technique killed mutants as fast as or faster than prior techniques. These results indicate that our mutant stubbornness technique constructs test sets that are minimal in size and effectively prioritized, as well as or better than other techniques.
  •  
20.
  • Grindal, Mats, et al. (author)
  • An Evaluation of Combination Strategies for Test Case Selection
  • 2006
  • In: Empirical Software Engineering. - : Springer. - 1382-3256 .- 1573-7616. ; 11:4, s. 583-611
  • Journal article (peer-reviewed)abstract
    • This paper presents results from a comparative evaluation of five combination strategies. Combination strategies are test case selection methods that combine “interesting” values of the input parameters of a test subject to form test cases. This research comparatively evaluated five combination strategies: the All Combinations strategy (AC), the Each Choice strategy (EC), the Base Choice strategy (BC), Orthogonal Arrays (OA) and the algorithm from the Automatic Efficient Test Generator (AETG). AC satisfies n-wise coverage, EC and BC satisfy 1-wise coverage, and OA and AETG satisfy pair-wise coverage. The All Combinations strategy was used as a “gold standard” strategy; it subsumes the others but is usually too expensive for practical use. The others were used in an experiment that used five programs seeded with 128 faults. The combination strategies were evaluated with respect to the number of test cases, the number of faults found, failure size, and number of decisions covered. The strategy that requires the fewest test cases, Each Choice, found the smallest number of faults. Although the Base Choice strategy requires fewer test cases than Orthogonal Arrays and AETG, it found as many faults. Analysis also shows some properties of the combination strategies that appear significant. The two most important results are that the Each Choice strategy is unpredictable in terms of which faults will be revealed, possibly indicating that faults are found by chance, and that the Base Choice and the pair-wise combination strategies to some extent target different types of faults.
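The two 1-wise strategies compared above can be sketched briefly. The parameter model is invented for illustration, and these are simplified versions of Each Choice and Base Choice, not the exact algorithms evaluated in the paper:

```python
# Hypothetical input parameter model: each parameter has a list of
# "interesting" values.
params = {"os": ["linux", "mac"], "db": ["pg", "sqlite", "mysql"]}

def each_choice(params):
    # 1-wise coverage: every interesting value appears in some test
    width = max(len(vs) for vs in params.values())
    return [{p: vs[i % len(vs)] for p, vs in params.items()}
            for i in range(width)]

def base_choice(params, base):
    # start from a semantically chosen base test, then vary one
    # parameter at a time
    tests = [dict(base)]
    for p, vs in params.items():
        for v in vs:
            if v != base[p]:
                t = dict(base)
                t[p] = v
                tests.append(t)
    return tests
```

For this model, Each Choice needs 3 tests (the size of the largest value list) while Base Choice needs 1 + sum of non-base values = 4 tests, mirroring the size relationship reported in the abstract.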
  •  
21.
  • Grindal, Mats, et al. (author)
  • An Evaluation of Combination Strategies for Test Case Selection
  • 2003
  • Reports (other academic/artistic)abstract
    • In this report we present the results from a comparative evaluation of five combination strategies. Combination strategies are test case selection methods that combine interesting values of the input parameters of a test object to form test cases. One of the investigated combination strategies, namely the Each Choice strategy, satisfies 1-wise coverage, i.e., each interesting value of each parameter is represented at least once in the test suite. Two of the strategies, the Orthogonal Arrays and Heuristic Pair-Wise strategies, both satisfy pair-wise coverage, i.e., every possible pair of interesting values of any two parameters is included in the test suite. The fourth combination strategy, the All Values strategy, generates all possible combinations of the interesting values of the input parameters. The fifth and last combination strategy, the Base Choice combination strategy, satisfies 1-wise coverage but in addition makes use of some semantic information to construct the test cases. Except for the All Values strategy, which is only used as a reference point with respect to the number of test cases, the combination strategies are evaluated and compared with respect to number of test cases, number of faults found, test suite failure density, and achieved decision coverage in an experiment comprising five programs, similar to Unix commands, seeded with 131 faults. As expected, the Each Choice strategy finds the smallest number of faults among the evaluated combination strategies. Surprisingly, the Base Choice strategy performs as well, in terms of detecting faults, as the pair-wise combination strategies, despite using fewer test cases. Since the programs and faults in our experiment may not be representative of actual testing problems in an industrial setting, we cannot draw any general conclusions regarding the number of faults detected by the evaluated combination strategies.
However, our analysis shows some properties of the combination strategies that appear significant in spite of the programs and faults not being representative. The two most important results are that the Each Choice strategy is unpredictable in terms of which faults will be detected, i.e., most faults found are found by chance, and that the Base Choice and the pair-wise combination strategies to some extent target different types of faults.
  •  
22.
  •  
23.
  • Hassan, Mahdi Mohammad, 1977-, et al. (author)
  • Testability and Software Performance : A Systematic Mapping Study
  • 2016
  • In: SAC '16. - New York, NY : Association for Computing Machinery (ACM). - 9781450337397 ; , s. 1566-1569
  • Conference paper (peer-reviewed)abstract
    • In most of the research on software testability, functional correctness of the software has been the focus, while the evidence regarding testability and non-functional properties such as performance is sporadic. The objective of this study is to present the current state of the art related to issues of importance, types and domains of software under test, types of research, contribution types and design evaluation methods concerning testability and software performance. We find that observability, controllability and testing effort are the main testability issues, while timeliness and response time (i.e., time constraints) are the main performance issues in focus. The primary studies in the area use diverse types of software under test within different domains, with real-time systems being the dominant domain. The researchers have proposed many different methods in the area; however, these methods lack implementation in practice.
  •  
24.
  • Hassan, Mahdi Mohammad, 1977-, et al. (author)
  • Testability and Software Robustness : A Systematic Literature Review
  • 2015
  • In: 2015 41st Euromicro Conference on Software Engineering and Advanced Applications. - Funchal, Madeira, Portugal : IEEE. - 9781467375856 ; , s. 341-348
  • Conference paper (peer-reviewed)abstract
    • The concept of software testability has been researched in several different dimensions; however, the relation of this important concept with other quality attributes is a grey area where existing evidence is scattered. The objective of this study is to present the state of the art with respect to issues of importance concerning software testability and an important quality attribute: software robustness. The objective is achieved by conducting a systematic literature review (SLR) on the topic. Our results show that a variety of testability issues are in focus, with observability and controllability issues being most researched. Fault tolerance, exception handling and handling external influence are prominent robustness issues in focus.
  •  
25.
  • Holmgren, Eva, 1972- (author)
  • Getting up when falling down : reducing fall risk factors after stroke through an exercise program
  • 2010
  • Doctoral thesis (other academic/artistic)abstract
    • The purpose of this thesis was to identify individuals (aged 55 or older) at risk of falling after stroke by validating a fall risk index, and to evaluate, in post-stroke individuals at high risk of falls, the impact of an intervention program on fall risk factors.
A previously developed fall risk index was validated, modified and re-validated. The validation showed a sensitivity of 97% and a specificity of 26%. This result was not considered sufficiently accurate. Therefore, a modified index was created in the Validation sample and re-validated back in the Model fit sample. The modified index was reduced to three items: postural stability + visuospatial hemi-inattention + male sex.
The randomized controlled trial comprised an intervention program (IP) with High-Intensity Functional Exercises as well as implementation of these exercises in real-life situations, together with educational group discussions. The participants were enrolled and randomized three to six months after their stroke. The assessments were performed at the Clinical Research Center at Norrlands University Hospital. The Intervention Group (IG) received a program of 35 sessions (exercise and group discussions) and the Control Group (CG) received five group discussions.
Performance of daily activities at the 6 month follow-up, and falls-efficacy post-intervention and at the 3 month follow-up, showed significant improvement in the IG compared with the CG (p<0.05). The IP did not have a statistically significant impact on Balance or Lifestyle activities. When evaluating gait, step time variability for the paretic leg and the variability in Cycle Time for the paretic and non-paretic leg were improved for the IG. The time spent on the non-paretic leg in the gait cycle's most stable phase, Double Support, was reduced by almost half (0.9 sec to 0.4 sec) from baseline for the IG after the intervention and remained reduced at the three month follow-up.
Quality of Life showed an improvement in the CG compared with the IG for the mental scales, Mental Component Scale and Mental Health subscale, at the 3 month follow-up (p=.02). In conclusion, this intervention program significantly improved performance of everyday life activities, falls-efficacy and the variability in gait. These are three major fall risk factors and might in the long run have an impact on decreasing falls in persons who have had a stroke.
  •  
26.
  • Jiang, Yuning, 1993-, et al. (author)
  • Complex Dependencies Analysis : Technical Description of Complex Dependencies in Critical Infrastructures, i.e. Smart Grids. Work Package 2.1 of the ELVIRA Project
  • 2018
  • Reports (other academic/artistic)abstract
    • This document reports a technical description of ELVIRA project results obtained as part of Work-package 2.1, entitled “Complex Dependencies Analysis”. In this technical report, we review recent research in which connections are regarded as factors influencing IT systems that monitor critical infrastructure, based on which potential dependencies and resulting disturbances are identified and categorized. Each kind of dependency is discussed based on our own entity-based model. Among these dependencies, logical and functional connections are analysed in more detail with respect to modelling and simulation techniques.
  •  
27.
  • Jiang, Yuning, 1993- (author)
  • Vulnerability Analysis for Critical Infrastructures
  • 2022
  • Doctoral thesis (other academic/artistic)abstract
    • The rapid advances in information and communication technology enable a shift from diverse systems empowered mainly by either hardware or software to cyber-physical systems (CPSs) that are driving critical infrastructures (CIs), such as energy and manufacturing systems. However, alongside the expected enhancements in efficiency and reliability, the induced connectivity exposes these CIs to cyberattacks, exemplified by the Stuxnet and WannaCry ransomware cyber incidents. Therefore, the need to improve the cybersecurity of CIs through vulnerability assessments cannot be overstated. Yet, CI cybersecurity has intrinsic challenges due to the convergence of information technology (IT) and operational technology (OT) as well as the cross-layer dependencies that are inherent to CPS-based CIs. Different IT and OT security terminologies also lead to ambiguities induced by knowledge gaps in CI cybersecurity. Moreover, current vulnerability-assessment processes in CIs are mostly subjective and human-centered. The imprecise nature of manual vulnerability assessment operations and the massive volume of data cause an unbearable burden for security analysts. Latest advances in machine-learning (ML) based cybersecurity solutions promise to shift such burden onto digital alternatives. Nevertheless, the heterogeneity, diversity and information gaps in existing vulnerability data repositories hamper accurate assessments anticipated by these ML-based approaches. Therefore, a comprehensive approach is envisioned in this thesis to unleash the power of ML advances while still involving human operators in assessing cybersecurity vulnerabilities within deployed CI networks. Specifically, this thesis proposes data-driven cybersecurity indicators to bridge vulnerability management gaps induced by ad-hoc and subjective auditing processes as well as to increase the level of automation in vulnerability analysis.
The proposed methodology follows design science research principles to support the development and validation of scientifically sound artifacts. More specifically, the proposed data-driven cybersecurity architecture orchestrates a range of modules that include: (i) a vulnerability data model that captures a variety of publicly accessible cybersecurity-related data sources; (ii) an ensemble-based ML pipeline method that self-adjusts to the best learning models for given cybersecurity tasks; and (iii) a knowledge taxonomy and its instantiated power grid and manufacturing models that capture CI common semantics of cyber-physical functional dependencies across CI networks in critical societal domains. This research contributes data-driven vulnerability analysis approaches that bridge the knowledge gaps among different security functions, such as vulnerability management through related reports analysis. This thesis also correlates vulnerability analysis findings to coordinate mitigation responses in complex CIs. More specifically, the vulnerability data model expands the vulnerability knowledge scope and curates meaningful contexts for vulnerability analysis processes. The proposed ML methods fill information gaps in vulnerability repositories using curated data while further streamlining vulnerability assessment processes. Moreover, the CI security taxonomy provides disciplined and coherent support to specify and group semantically related components and coordination mechanisms in order to harness the notorious complexity of CI networks such as those prevalent in power grids and manufacturing infrastructures. These approaches learn through interactive processes to proactively detect and analyze vulnerabilities while facilitating actionable insights for security actors to make informed decisions.
  •  
28.
  • Johansson, Birgitta, 1959-, et al. (author)
  • Bedömning av rehabiliteringsbehov
  • 2013. - 1
  • In: Rehabilitering vid cancersjukdom. - Stockholm : Natur och kultur. - 9789127131286 ; , s. 38-56
  • Book chapter (pop. science, debate, etc.)
  •  
29.
  •  
30.
  • Johansson, Inger, 1943-, et al. (author)
  • Balancing integrity vs. risk of falling - Nurses experiences of caring for elderly people with dementia in nursing homes
  • 2009
  • In: Journal of Research in Nursing. - : SAGE Publications. - 1744-9871 .- 1744-988X. ; 14:1, s. 61-73
  • Journal article (peer-reviewed)abstract
    • Dementia is recognized as being a major risk factor for falls, which cause suffering and increase dependency for the individual. The purpose of this study was to explore registered nurses' and nurse assistants' experiences of caring for elderly people with dementia who are at risk of falling, and factors that contribute to or reduce falls in this group. A phenomenographic design was chosen. Ten nurses and 18 nurse assistants with experience of fall events were strategically selected for a recorded interview. The informants were chosen from 10 nursing homes in Sweden and Norway. They were asked to describe a fall situation they had been involved in when caring for elderly people with dementia. The findings shed light on an ethical dilemma in the main category 'Balancing integrity and autonomy versus risk of falling', which was comprehensively related to two descriptive categories. The first was 'Adjusting to the older person's condition', with the concepts of forgetfulness, anxiety and confusion, ability to express oneself and understand, and bodily build and function. The second category was 'Adjusting the care environment', comprising these conceptions: the physical environment, the psychosocial environment, organization and human resources. Based on the staff's perceived difficulties in preventing falls in elderly people with dementia, there is a need for additional support or professional supervision in their work to enhance possibilities for successful fall prevention.
  •  
31.
  • Johnsen, Andreas (author)
  • Architecture-Based Verification of Dependable Embedded Systems
  • 2013
  • Licentiate thesis (other academic/artistic)abstract
    • Quality assurance of dependable embedded systems is becoming increasingly difficult, as developers are required to build more complex systems on tighter budgets. As systems become more complex, system architects must make increasingly complex architecture design decisions. The process of making the architecture design decisions of an intended system is the very first, and the most significant, step of ensuring that the developed system will meet its requirements, including requirements on its ability to tolerate faults. Since the decisions play a key role in the design of a dependable embedded system, they have a comprehensive effect on the development process and the largest impact on the developed system. Any faulty architecture design decision will, consequently, propagate throughout the development process, and is likely to lead to a system not meeting the requirements, an unacceptable level of dependability and costly corrections.Architecture design decisions are in turn critical with respect to quality and dependability of a system, and the cost of the development process. It is therefore crucial to prevent faulty architecture design decisions and, as early as practicable, detect and remove faulty decisions that have not successfully been prevented. The use of Architecture Description Languages (ADLs) helps developers to cope with the increasing complexity by formal and standardized means of communication and understanding. Furthermore, the availability of a formal description enables automated and formal analysis of the architecture design.The contribution of this licentiate thesis is an architecture quality assurance framework for safety-critical, performance-critical and mission-critical embedded systems specified by the Architecture Analysis and Design Language (AADL). 
The framework is developed through the adaptation of formal methods, in particular traditional model checking and model-based testing techniques, to AADL, by defining formal verification criteria for AADL and a formal AADL semantics. Model checking of AADL models provides evidence of the completeness, consistency and correctness of the model, and allows for automated avoidance of faulty architecture design decisions, costly corrections and threats to quality and dependability. In addition, the framework can automatically generate test suites from AADL models to test a developed system with respect to the architecture design decisions. A successful test suite execution provides evidence that the architecture design has been implemented correctly. Methods for selective regression verification are included in the framework to cost-efficiently re-verify a modified architecture design, such as after a correction of a faulty design decision.
  •  
32.
  • Johnson, Randi K., et al. (author)
  • Metabolite-related dietary patterns and the development of islet autoimmunity
  • 2019
  • In: Scientific Reports. - : Springer Science and Business Media LLC. - 2045-2322. ; 9:1
  • Journal article (peer-reviewed)abstract
    • The role of diet in type 1 diabetes development is poorly understood. Metabolites, which reflect dietary response, may help elucidate this role. We explored metabolomics and lipidomics differences between 352 cases of islet autoimmunity (IA) and controls in the TEDDY (The Environmental Determinants of Diabetes in the Young) study. We created dietary patterns reflecting pre-IA metabolite differences between groups and examined their association with IA. Secondary outcomes included IA cases positive for multiple autoantibodies (mAb+). The association of 853 plasma metabolites with outcomes was tested at seroconversion to IA, just prior to seroconversion, and during infancy. Key compounds in enriched metabolite sets were used to create dietary patterns reflecting metabolite composition, which were then tested for association with outcomes in the nested case-control subset and the full TEDDY cohort. Unsaturated phosphatidylcholines, sphingomyelins, phosphatidylethanolamines, glucosylceramides, and phospholipid ethers in infancy were inversely associated with mAb+ risk, while dicarboxylic acids were associated with an increased risk. An infancy dietary pattern representing higher levels of unsaturated phosphatidylcholines and phospholipid ethers, and lower sphingomyelins was protective for mAb+ in the nested case-control study only. Characterization of this high-risk infant metabolomics profile may help shape the future of early diagnosis or prevention efforts.
  •  
33.
  • Krischer, Jeffrey P, et al. (author)
  • Predicting Islet Cell Autoimmunity and Type 1 Diabetes : An 8-Year TEDDY Study Progress Report
  • 2019
  • In: Diabetes Care. - : American Diabetes Association. - 1935-5548 .- 0149-5992. ; 42:6, s. 1051-1060
  • Journal article (peer-reviewed)abstract
    • OBJECTIVE: Assessment of the predictive power of The Environmental Determinants of Diabetes in the Young (TEDDY)-identified risk factors for islet autoimmunity (IA), the type of autoantibody appearing first, and type 1 diabetes (T1D).RESEARCH DESIGN AND METHODS: A total of 7,777 children were followed from birth to a median of 9.1 years of age for the development of islet autoantibodies and progression to T1D. Time-dependent sensitivity, specificity, and receiver operating characteristic (ROC) curves were calculated to provide estimates of their individual and collective ability to predict IA and T1D.RESULTS: HLA genotype (DR3/4 vs. others) was the best predictor for IA (Youden's index J = 0.117) and single nucleotide polymorphism rs2476601, in PTPN22, was the best predictor for insulin autoantibodies (IAA) appearing first (IAA-first) (J = 0.123). For GAD autoantibodies (GADA)-first, weight at 1 year was the best predictor (J = 0.114). In a multivariate model, the area under the ROC curve (AUC) was 0.678 (95% CI 0.655, 0.701), 0.707 (95% CI 0.676, 0.739), and 0.686 (95% CI 0.651, 0.722) for IA, IAA-first, and GADA-first, respectively, at 6 years. The AUC of the prediction model for T1D at 3 years after the appearance of multiple autoantibodies reached 0.706 (95% CI 0.649, 0.762).CONCLUSIONS: Prediction modeling statistics are valuable tools, when applied in a time-until-event setting, to evaluate the ability of risk factors to discriminate between those who will and those who will not get disease. Although significantly associated with IA and T1D, the TEDDY risk factors individually contribute little to prediction. However, in combination, these factors increased IA and T1D prediction substantially.
  •  
34.
  • Larsson, Birgitta, et al. (author)
  • Extracts of ECL-cell granules/vesicles and of isolated ECL cells from rat oxyntic mucosa evoke a Ca2+ second messenger response in osteoblastic cells
  • 2001
  • In: Regulatory Peptides. ; 97:2-3, s. 153-161
  • Journal article (peer-reviewed)abstract
    • Surgical removal of the acid-producing part of the stomach (oxyntic mucosa) reduces bone mass through mechanisms not yet fully understood. The existence of an osteotropic hormone produced by the so-called ECL cells has been suggested. These cells, which are numerous in the oxyntic mucosa, operate under the control of circulating gastrin. Both gastrin and an extract of the oxyntic mucosa decrease blood calcium and stimulate Ca2+ uptake into bone. Conceivably, gastrin lowers blood calcium indirectly by releasing a hypothetical hormone from the ECL cells. The present study investigated, by means of fura-2 fluorometry, the effect of extracts of preparations enriched in ECL cell granules/vesicles from rat oxyntic mucosa on mobilization of intracellular Ca2+ in three osteoblast-like cell lines, UMR-106.01, MC3T3-E1 and Saos-2, and of extracts of isolated ECL cells in UMR-106.01 cells. The extracts were found to induce a dose-related rapid increase in intracellular Ca2+ concentrations in the osteoblast-like cells. The response was not due to histamine or pancreastatin, known ECL cell constituents, and could be abolished by pre-digesting the extracts with exo-aminopeptidase. The results show that the increase in [Ca2+]i reflects a mobilization of Ca2+ from the endoplasmic reticulum. The observation of an increase in [Ca2+]i also in murine embryonic fibroblasts shows that the response is not limited to osteoblastic cells. The finding that the extracts evoked a typical Ca2+-mediated second messenger response in osteoblastic cells provides evidence for the existence of a novel osteotropic peptide hormone (gastrocalcin), produced in the ECL cells, and supports the view that gastrectomy-induced osteopathy may reflect a lack of this hormone.
  •  
35.
  • Lindström, Anders, 1964-, et al. (author)
  • User-centered development of a driving simulator for training of emergency vehicle drivers and development of Emergency Vehicle Approaching messaging : a simulator study
  • 2020
  • In: Proceedings of 8th Transport Research Arena TRA 2020.
  • Conference paper (peer-reviewed)abstract
    • There is a large risk of accidents in connection with emergency driving, and the need for both better possibilities to train emergency vehicle driving and for systems guiding other road users to the right behavior is apparent. The aim of this study was (1) to initiate user-centered development of a driving simulator for training of emergency vehicle drivers and (2) to collect information about how best to communicate Emergency Vehicle Approaching (EVA) messages. The method used is user-involved, iterative development of both the driving scenario and the driving simulator. A total of 104 participants tried the simulator and responded to a questionnaire. The most difficult situations for emergency vehicle drivers are vehicles in front braking suddenly and other drivers failing to notice them. The desired behaviour in other road users is to yield to the right and brake smoothly. The attitude towards communication of Emergency Vehicle (EV) driving is positive, regarding both pre-alerting drivers who are approaching an incident scene and sending out EVA messages.
  •  
36.
  • Lindström, Birgitta, et al. (author)
  • Generating Trace-Sets for Model-Based Testing
  • 2007
  • In: Proceedings 18th IEEE International Symposiumon Software Reliability Engineering. - : IEEE. - 9780769530246 - 0769530249 ; , s. 171-180
  • Conference paper (peer-reviewed)abstract
    • Model-checkers are powerful tools that can find individual traces through models to satisfy desired properties. These traces provide solutions to a number of problems. Instead of individual traces, software testing needs sets of traces that satisfy coverage criteria. Finding a trace set in a large model is difficult because model checkers generate single traces and use a lot of memory. Space and time requirements of model-checking algorithms grow exponentially with respect to the number of variables and parallel automata of the model being analyzed. We present a method that generates a set of traces by iteratively invoking a model checker. The method mitigates the memory consumption problem by dynamically building partitions along the traces. This method was applied to a testability case study, and it generated the complete trace set, while ordinary model-checking could only generate 26%.
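The iterative invocation idea can be sketched abstractly. Here `find_trace` is a stand-in for a single model-checker call that returns a witness trace for one coverage target, and the dynamic partitioning that controls memory use is omitted:

```python
# Schematic sketch (not the paper's algorithm): collect a trace set by
# repeatedly asking a model checker for a trace reaching one
# still-uncovered coverage target.
def collect_traces(targets, find_trace):
    traces, uncovered = [], set(targets)
    while uncovered:
        target = next(iter(uncovered))
        trace = find_trace(target)     # one model-checker invocation
        if trace is None:
            uncovered.discard(target)  # target is unreachable; give up on it
            continue
        traces.append(trace)
        uncovered -= set(trace)        # one trace may cover several targets
    return traces
```

Each invocation is a small, independent query, which is the property the paper exploits to keep memory use bounded compared with one monolithic analysis.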
  •  
37.
  • Lindström, Birgitta, et al. (author)
  • Generating Trace-Sets for Model-based Testing
  • 2007
  • In: Proceedings of the 18th IEEE International Symposium on Software Reliability. - 9780769530246 ; , s. 171-180
  • Conference paper (peer-reviewed)abstract
    • Model-checkers are powerful tools that can find individual traces through models to satisfy desired properties. These traces provide solutions to a number of problems. Instead of individual traces, software testing needs sets of traces that satisfy coverage criteria. Finding a trace set in a large model is difficult because model checkers generate single traces and use a lot of memory. Space and time requirements of model-checking algorithms grow exponentially with respect to the number of variables and parallel automata of the model being analyzed. We present a method that generates a set of traces by iteratively invoking a model checker. The method mitigates the memory consumption problem by dynamically building partitions along the traces. This method was applied to a testability case study, and it generated the complete trace set, while ordinary model-checking could only generate 26%. 
  •  
38.
  • Lindström, Birgitta, et al. (author)
  • Generating Trace-Sets for Model-based Testing
  • 2007
  • In: The 18th IEEE International Symposium on Software Reliability (ISSRE '07). ; , s. 171-180
  • Conference paper (pop. science, debate, etc.)abstract
    • Model-checkers are powerful tools that can find individual traces through models to satisfy desired properties. These traces provide solutions to a number of problems. Instead of individual traces, software testing needs sets of traces that satisfy coverage criteria. Finding a trace set in a large model is difficult because model checkers generate single traces and use a lot of memory. Space and time requirements of model-checking algorithms grow exponentially with respect to the number of variables and parallel automata of the model being analyzed. We present a method that generates a set of traces by iteratively invoking a model checker. The method mitigates the memory consumption problem by dynamically building partitions along the traces. This method was applied to a testability case study, and it generated the complete trace set, while ordinary model-checking could only generate 26%.
  •  
39.
  • Lindström, Birgitta, et al. (author)
  • Identifying Useful Mutants to Test Time Properties
  • 2018
  • In: 2018 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW). - : IEEE Computer Society. - 9781538663523 - 9781538663530 ; , s. 69-76
  • Conference paper (peer-reviewed)abstract
    • Real-time systems have to be verified and tested for timely behavior as well as functional behavior. Thus, time is an extra dimension that adds to the complexity of software testing. A timed automata model with a model-checker can be used to generate timed test traces. To properly test the timely behavior, the set of test traces should challenge the different time constraints in the model. This paper describes and adapts mutation operators that target such time constraints in timed automata models. Time mutation operators apply a delta to the time constraints to help testers design tests that exceed the time constraints. We suggest that the size of this delta determines how easy the mutant is to kill and that the optimal delta varies by the program, mutation operator, and the individual mutant. To avoid trivial and equivalent time mutants, the delta should be set individually for each mutant. We discuss mutant subsumption and define the problem of finding dominator mutants in this new domain. In this position paper, we outline an iterative tuning process where a statistical model-checker, UPPAAL SMC, is used to: (i) create a tuned set of dominator time mutants, and (ii) generate test traces that kill the mutants.
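Why the size of the delta matters can be shown with a toy example (invented names, not UPPAAL's input language): a time guard mutated by a delta is killed only if some observed clock value falls between the original bound and the mutated one:

```python
# Toy sketch of a time mutant: a guard "clock <= bound" whose bound is
# shifted by delta. A set of observed clock values kills the mutant
# only if original and mutant disagree on at least one value.
def guard(bound):
    return lambda clock: clock <= bound

def kills(bound, delta, clock_values):
    orig, mutant = guard(bound), guard(bound + delta)
    return any(orig(c) != mutant(c) for c in clock_values)
```

With bound 10 and delta 2, a trace observing clock value 11 kills the mutant, while a trace that stays at 9 or below cannot; the smaller the delta, the more precisely timed the killing test must be, which is the per-mutant tuning problem the paper outlines.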
  •  
40.
  • Lindström, Birgitta, et al. (author)
  • Message from the TestEd 2020 Chairs
  • 2020
  • In: 2020 IEEE 13th International Conference on Software Testing, Verification and Validation Workshops. - : IEEE. - 9781728110752 - 9781728110769
  • Conference paper (other academic/artistic)
  •  
41.
  • Lindström, Birgitta, et al. (author)
  • Model-Checking with Insufficient Memory Resources
  • 2006
  • Reports (other academic/artistic)abstract
    • Resource limitation is a major problem in model checking: the space and time requirements of model-checking algorithms grow exponentially with the number of variables and parallel automata in the analyzed model. We present a method that grew out of experiences from a case study and has enabled us to analyze models with much larger state spaces than was possible without it. The basic idea is to build partitions of the state space of the analyzed system through iterative invocations of a model checker. In each iteration the partitions are extended to represent a larger part of the state space and, if needed, are further partitioned. The analysis problem is thereby divided into a set of subproblems that can be analyzed independently of each other. We present how the method, implemented as a meta-algorithm on top of the Uppaal tool, has been applied in the case study.
  •  
42.
  • Lindström, Birgitta, et al. (author)
  • Mutating Aspect-Oriented Models to Test Cross-Cutting Concerns
  • 2015
  • In: 2015 IEEE 8th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2015 - Proceedings. - : IEEE conference proceedings. - 9781479918850 ; , s. Article number 7107456-
  • Conference paper (peer-reviewed)
  •  
43.
  • Lindström, Birgitta, et al. (author)
  • On redundant mutants and strong mutation
  • 2015
  • Reports (other academic/artistic)abstract
    • This study evaluates a theory of subsumption relations among mutants created by the ROR mutation operator, a theory that makes most of these mutants redundant. A redundant mutant can be skipped during mutation analysis without decreasing the quality of the resulting test suite. The theory is interesting since mutation testing is computationally expensive and the theory states that the set of ROR mutants can be reduced by 57%. The reduced set of ROR mutants has therefore been used in several recent studies. However, we provide proof that this theory does not hold for strong mutation and that part of the theory is incorrect. To our knowledge, the theory itself has never before been evaluated empirically. By finding counterexamples, we prove that a test suite which is 100% adequate for the non-redundant ROR mutants might not be 100% adequate for the mutants that are supposed to be redundant. The subsumption relations do not hold for strong mutation. We have also proved that more than one top-level mutant can be detected by the same test, which should not be possible according to the theory. Hence, this part of the theory is incorrect, independent of strong or weak mutation. Our findings are important since strong mutation is frequently used to evaluate test suites and testing criteria. Just as redundant mutants can give an overestimation of the mutation score for a test suite, using the reduced set can give an underestimation. Results reported from such studies should therefore be accompanied by information on whether the reduced or complete set of ROR mutants was used and whether the researchers used strong or weak mutation.
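The subsumption theory the abstract refers to is easiest to see under weak mutation, where it does hold. The sketch below (illustrative only, not code from the report) enumerates ROR mutants of the predicate `a < b` over a small integer domain, computes each mutant's weak kill set (inputs where the mutated predicate evaluates differently), and checks that the three dominator mutants for `<` (replacement by `<=`, `!=`, and `false`) subsume all others: any test killing a dominator also kills the subsumed mutant.

```python
import operator

ORIGINAL = operator.lt  # the predicate under mutation: a < b
MUTANTS = {
    "<=": operator.le, ">": operator.gt, ">=": operator.ge,
    "==": operator.eq, "!=": operator.ne,
    "true": lambda a, b: True, "false": lambda a, b: False,
}
DOMAIN = [(a, b) for a in range(-2, 3) for b in range(-2, 3)]
DOMINATORS = ("<=", "!=", "false")

def kill_set(mutant):
    # weak kill: the mutated predicate evaluates differently from the original
    return {x for x in DOMAIN if mutant(*x) != ORIGINAL(*x)}

kills = {name: kill_set(fn) for name, fn in MUTANTS.items()}

# M1 subsumes M2 iff every test killing M1 also kills M2,
# i.e. kill(M1) is a subset of kill(M2).
for name in MUTANTS:
    if name not in DOMINATORS:
        assert any(kills[d] <= kills[name] for d in DOMINATORS)
print("dominators {<=, !=, false} subsume all other ROR mutants of <")
```

The three dominator kill sets partition the inputs (a == b, a > b, a < b respectively), so each remaining mutant's kill set is a superset of at least one of them. The paper's point is that this subset argument is about predicate *evaluation*, i.e. weak mutation; with strong mutation the differing evaluation must also propagate to output, and the relations can break.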
  •  
44.
  • Lindström, Birgitta, et al. (author)
  • On Strong Mutation and Subsuming Mutants
  • 2016
  • In: Proceedings. - : IEEE Computer Society. - 9781509018260 ; , s. 112-121
  • Conference paper (peer-reviewed)abstract
    • Mutation analysis is a powerful technique for software testing, but it is also known to be computationally expensive. The main reason for the high computational cost is that many of the mutants are redundant and thus do not contribute to the quality of the test suite. One of the most promising approaches to avoid producing redundant mutants is to identify subsumption relations among mutants, preferably before they are generated. Such relations have, for example, been identified at an operator level for mutants created by the ROR operator. This reduced set of non-redundant mutants has been used in several recent studies and is also the default option in at least one mutation testing tool that supports strong mutation. This raises the question of whether the identified subsumption relations between the mutants hold in a context of strong mutation, or of variants of weak mutation that require some limited error propagation (firm mutation). We have conducted an experimental study to investigate the subsumption relations in the context of strong or firm mutation. We observed that it is possible to create a test suite that is 100% adequate for the reduced set of mutants while not being 100% adequate for the complete set. This shows that the subsumption relations do not hold for strong or firm mutation. We provide several examples of this behavior and discuss the root causes. Our findings are important since strong and firm mutation are both frequently used to evaluate test suites and testing criteria.
The choice of whether to use a reduced set of mutants or the entire set should, however, not be made without considering the context in which they are used (i.e., strong, firm or weak mutation), since the subsumption relations between ROR mutants do not hold for strong or firm mutation. Just as redundant mutants can give an overestimation of the mutation score for a test suite, using the reduced set of mutants together with strong or firm mutation can give an underestimation. Results reported from such studies should therefore be accompanied by information on whether the reduced or complete set of mutants was used and whether the researchers used strong, firm or weak mutation.
  •  
45.
  • Lindström, Birgitta, et al. (author)
  • On strong mutation and the theory of subsuming logic‐based mutants
  • 2019
  • In: Software testing, verification & reliability. - : John Wiley & Sons. - 0960-0833 .- 1099-1689. ; 29:1-2 Special Issue: SI, s. 1-23
  • Journal article (peer-reviewed)abstract
    • Redundant mutants might cause problems when benchmarking, since testing techniques can get high scores without detecting any nonredundant mutants. However, removing nonredundant mutants might cause similar problems. Subsumed mutants are by definition also redundant, since no additional tests are required to detect them once all other mutants are detected. We focus on relational operator replacement (ROR) and conditional operator replacement mutants. Subsumption relations between ROR mutants are defined by fault hierarchies. The fault hierarchies are proven for weak mutation but have, since they were published, been used with strong mutation. We prove that the ROR fault hierarchies do not hold for strong mutation and show why. We also show that the probability that a random test experiences the problem can be more than 30% and that 50% of the mutants might be affected in a real software system. Finally, we show that there is a similar problem with the theory on sufficient conditional operator replacement.
  •  
46.
  • Lindström, Birgitta, et al. (author)
  • Six Issues in Testing Event-Triggered Real-Time Systems
  • 2007
  • Reports (other academic/artistic)abstract
    • Verification of real-time systems is a complex task, with problems arising from issues such as concurrency. A previous paper suggested dealing with these problems by using a time-triggered design, which gives good support for both testing and formal analysis. However, a time-triggered solution is not always feasible, and an event-triggered design is then needed. Event-triggered systems are far more difficult to test than time-triggered systems. This paper revisits previously identified testing problems from a new perspective and identifies additional problems for event-triggered systems. The paper also presents an approach to deal with these problems. The TETReS project assumes a model-driven development process. We combine research within three different fields: (i) transformation of rule sets between timed automata specifications and ECA rules with maintained semantics, (ii) increasing testability in event-triggered systems, and (iii) development of test-case generation methods for event-triggered systems.
  •  
47.
  • Lindström, Birgitta, et al. (author)
  • Testability of dynamic real-time systems
  • 2002
  • In: Proceedings of Eigth International Conference on Real-Time Computing Systems and Applications (RTCSA2002). ; , s. 93-97
  • Conference paper (peer-reviewed)
  •  
48.
  • Lindström, Birgitta, et al. (author)
  • Testability of Dynamic Real-Time Systems : An Empirical Study of Constrained Execution Environment Implications
  • 2008
  • In: Proceedings of the First International Conference on Software Testing, Verification and Validation. - Los Alamitos : IEEE Computer Society. - 9780769531274 - 076953127X ; , s. 112-120
  • Conference paper (peer-reviewed)abstract
    • Real-time systems must respond to events in a timely fashion; in hard real-time systems the penalty for a missed deadline is high. It is therefore necessary to design hard real-time systems so that the timing behavior of the tasks can be predicted. Static real-time systems have prior knowledge of the worst-case arrival patterns and resource usage. Therefore, a schedule can be calculated off-line and tasks can be guaranteed to have sufficient resources to complete (resource adequacy). Dynamic real-time systems, on the other hand, do not have such prior knowledge and therefore must react to events when they occur. They must also adapt to changes in the urgencies of various tasks and fairly allocate resources among the tasks. A disadvantage of static real-time systems is that the requirement of resource adequacy makes them expensive and often impractical. Dynamic real-time systems, on the other hand, have the disadvantage of being less predictable and therefore difficult to test. Hence, in dynamic systems, timeliness is hard to guarantee and reliability is often low. Using a constrained execution environment, we attempt to increase the testability of such systems. An initial step is to identify factors that affect testability. We present empirical results on how various factors in the execution environment impact the testability of real-time systems. The results show that some of the factors previously identified as possibly impacting testability do not have an impact, while others do.
  •  
49.
  • Lindström, Birgitta, 1958- (author)
  • Testability of Dynamic Real-Time Systems
  • 2009
  • Doctoral thesis (other academic/artistic)abstract
    • This dissertation concerns the testability of event-triggered real-time systems. Real-time systems are known to be hard to test because they are required to function correctly both with respect to what the system does and when it does it. An event-triggered real-time system is directly controlled by the events that occur in the environment, as opposed to a time-triggered system, whose behavior with respect to when the system does something is constrained, and therefore more predictable. The focus of this dissertation is behavior in the time domain, and it is shown how testability is affected by various factors when the system is tested for timeliness. This dissertation presents a survey of research that focuses on software testability and the testability of real-time systems. The survey motivates both the view of testability taken in this dissertation and the metric chosen to measure testability in an experiment. We define a method to generate sets of traces from a model by using a meta-algorithm on top of a model checker. Defining such a method is a necessary step to perform the experiment; however, the trace sets generated by this method can also be used by test strategies that are based on orderings, for example execution orders. An experimental study is presented in detail. The experiment investigates how the testability of an event-triggered real-time system is affected by some constraining properties of the execution environment, studying the effect on testability of three different constraints regarding preemptions, observations and process instances. All of these constraints were claimed in previous work to be significant factors for the level of testability. Our results support the claim for the first two constraints, while the third constraint shows no impact on the level of testability. Finally, this dissertation discusses the effect on the event-triggered semantics when the constraints are applied to the execution environment. The result of this discussion is that the first two constraints do not change the semantics while the third one does. This result indicates that a constraint on the number of process instances might be less useful for some event-triggered real-time systems.
  •  
50.
  • Lindström, Birgitta, et al. (author)
  • Using an Existing Suite of Test Objects : Experience from a Testing Experiment
  • 2004
  • In: ACM SIGSOFT Software Engineering Notes. - : Association for Computing Machinery (ACM). ; , s. 1-3
  • Conference paper (other academic/artistic)abstract
    • This workshop paper presents lessons learned from a recent experiment to compare several test strategies. The test strategies were compared in terms of the number of tests needed to satisfy them and in terms of faults found. The experimental design and conduct are discussed, and frank assessments of the decisions that were made are provided. The paper closes with a summary of the lessons that were learned.
  •  