SwePub
Search the SwePub database


Result list for search: WFRF:(Hughes John 1958)

  • Result 1-50 of 60
1.
  • Lozano, Rafael, et al. (author)
  • Measuring progress from 1990 to 2017 and projecting attainment to 2030 of the health-related Sustainable Development Goals for 195 countries and territories: a systematic analysis for the Global Burden of Disease Study 2017
  • 2018
  • In: The Lancet. - : Elsevier. - 1474-547X .- 0140-6736. ; 392:10159, s. 2091-2138
  • Journal article (peer-reviewed)abstract
    • Background: Efforts to establish the 2015 baseline and monitor early implementation of the UN Sustainable Development Goals (SDGs) highlight both great potential for and threats to improving health by 2030. To fully deliver on the SDG aim of “leaving no one behind”, it is increasingly important to examine the health-related SDGs beyond national-level estimates. As part of the Global Burden of Diseases, Injuries, and Risk Factors Study 2017 (GBD 2017), we measured progress on 41 of 52 health-related SDG indicators and estimated the health-related SDG index for 195 countries and territories for the period 1990–2017, projected indicators to 2030, and analysed global attainment. Methods: We measured progress on 41 health-related SDG indicators from 1990 to 2017, an increase of four indicators since GBD 2016 (new indicators were health worker density, sexual violence by non-intimate partners, population census status, and prevalence of physical and sexual violence [reported separately]). We also improved the measurement of several previously reported indicators. We constructed national-level estimates and, for a subset of health-related SDGs, examined indicator-level differences by sex and Socio-demographic Index (SDI) quintile. We also did subnational assessments of performance for selected countries. To construct the health-related SDG index, we transformed the value for each indicator on a scale of 0–100, with 0 as the 2·5th percentile and 100 as the 97·5th percentile of 1000 draws calculated from 1990 to 2030, and took the geometric mean of the scaled indicators by target. To generate projections through 2030, we used a forecasting framework that drew estimates from the broader GBD study and used weighted averages of indicator-specific and country-specific annualised rates of change from 1990 to 2017 to inform future estimates. We assessed attainment of indicators with defined targets in two ways: first, using mean values projected for 2030, and then using the probability of attainment in 2030 calculated from 1000 draws. We also did a global attainment analysis of the feasibility of attaining SDG targets on the basis of past trends. Using 2015 global averages of indicators with defined SDG targets, we calculated the global annualised rates of change required from 2015 to 2030 to meet these targets, and then identified in what percentiles the required global annualised rates of change fell in the distribution of country-level rates of change from 1990 to 2015. We took the mean of these global percentile values across indicators and applied the past rate of change at this mean global percentile to all health-related SDG indicators, irrespective of target definition, to estimate the equivalent 2030 global average value and percentage change from 2015 to 2030 for each indicator. Findings: The global median health-related SDG index in 2017 was 59·4 (IQR 35·4–67·3), ranging from a low of 11·6 (95% uncertainty interval 9·6–14·0) to a high of 84·9 (83·1–86·7). SDG index values in countries assessed at the subnational level varied substantially, particularly in China and India, although scores in Japan and the UK were more homogeneous. Indicators also varied by SDI quintile and sex, with males having worse outcomes than females for non-communicable disease (NCD) mortality, alcohol use, and smoking, among others. Most countries were projected to have a higher health-related SDG index in 2030 than in 2017, while country-level probabilities of attainment by 2030 varied widely by indicator. 
Under-5 mortality, neonatal mortality, maternal mortality ratio, and malaria indicators had the most countries with at least 95% probability of target attainment. Other indicators, including NCD mortality and suicide mortality, had no countries projected to meet corresponding SDG targets on the basis of projected mean values for 2030 but showed some probability of attainment by 2030. For some indicators, including child malnutrition, several infectious diseases, and most violence measures, the annualised rates of change required to meet SDG targets far exceeded the pace of progress achieved by any country in the recent past. We found that applying the mean global annualised rate of change to indicators without defined targets would equate to about 19% and 22% reductions in global smoking and alcohol consumption, respectively; a 47% decline in adolescent birth rates; and a more than 85% increase in health worker density per 1000 population by 2030. Interpretation: The GBD study offers a unique, robust platform for monitoring the health-related SDGs across demographic and geographic dimensions. Our findings underscore the importance of increased collection and analysis of disaggregated data and highlight where more deliberate design or targeting of interventions could accelerate progress in attaining the SDGs. Current projections show that many health-related SDG indicators, NCDs, NCD-related risks, and violence-related indicators will require a concerted shift away from what might have driven past gains—curative interventions in the case of NCDs—towards multisectoral, prevention-oriented policy action and investments to achieve SDG aims. Notably, several targets, if they are to be met by 2030, demand a pace of progress that no country has achieved in the recent past. The future is fundamentally uncertain, and no model can fully predict what breakthroughs or events might alter the course of the SDGs. What is clear is that our actions—or inaction—today will ultimately dictate how close the world, collectively, can get to leaving no one behind by 2030.
  •  
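
The index construction described in this abstract (rescaling each indicator to 0-100 between its 2.5th and 97.5th percentiles, then taking a geometric mean) can be illustrated with a small sketch. The Haskell fragment below only illustrates that arithmetic; it is not the GBD 2017 code, and the sample values are invented:

      -- Toy illustration of the index arithmetic described above: rescale an
      -- indicator to 0-100 between the 2.5th and 97.5th percentiles of a set
      -- of draws, then combine scaled indicators with a geometric mean.
      import Data.List (sort)

      -- Empirical percentile using a simple nearest-rank rule.
      percentile :: Double -> [Double] -> Double
      percentile p xs = sorted !! idx
        where
          sorted = sort xs
          n      = length sorted
          idx    = min (n - 1) (max 0 (ceiling (p / 100 * fromIntegral n) - 1))

      -- Rescale one indicator value, clamping to the 0-100 range.
      scaleIndicator :: [Double] -> Double -> Double
      scaleIndicator draws x = max 0 (min 100 (100 * (x - lo) / (hi - lo)))
        where
          lo = percentile 2.5  draws
          hi = percentile 97.5 draws

      -- Geometric mean of already-scaled indicators (the index for one target).
      geometricMean :: [Double] -> Double
      geometricMean ys = product ys ** (1 / fromIntegral (length ys))

      main :: IO ()
      main = print (geometricMean (map (scaleIndicator draws) [20, 55, 90]))
        where draws = [0, 1 .. 100]   -- invented draws, for demonstration only
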
2.
  • de Erausquin, Gabriel A, et al. (author)
  • Chronic neuropsychiatric sequelae of SARS-CoV-2: Protocol and methods from the Alzheimer's Association Global Consortium.
  • 2022
  • In: Alzheimer's & dementia (New York, N. Y.). - : Wiley. - 2352-8737. ; 8:1
  • Journal article (peer-reviewed)abstract
    • Coronavirus disease 2019 (COVID-19) has caused >3.5 million deaths worldwide and affected >160 million people. At least twice as many have been infected but remained asymptomatic or minimally symptomatic. COVID-19 includes central nervous system manifestations mediated by inflammation and cerebrovascular, anoxic, and/or viral neurotoxicity mechanisms. More than one third of patients with COVID-19 develop neurologic problems during the acute phase of the illness, including loss of sense of smell or taste, seizures, and stroke. Damage or functional changes to the brain may result in chronic sequelae. The risk of incident cognitive and neuropsychiatric complications appears independent from the severity of the original pulmonary illness. It behooves the scientific and medical community to attempt to understand the molecular and/or systemic factors linking COVID-19 to neurologic illness, both short and long term. This article describes what is known so far in terms of links among COVID-19, the brain, neurological symptoms, and Alzheimer's disease (AD) and related dementias. We focus on risk factors and possible molecular, inflammatory, and viral mechanisms underlying neurological injury. We also provide a comprehensive description of the Alzheimer's Association Consortium on Chronic Neuropsychiatric Sequelae of SARS-CoV-2 infection (CNS SC2) harmonized methodology to address these questions using a worldwide network of researchers and institutions. Successful harmonization of designs and methods was achieved through a consensus process initially fragmented by specific interest groups (epidemiology, clinical assessments, cognitive evaluation, biomarkers, and neuroimaging). Conclusions from subcommittees were presented to the whole group and discussed extensively. Presently data collection is ongoing at 19 sites in 12 countries representing Asia, Africa, the Americas, and Europe. The Alzheimer's Association Global Consortium harmonized methodology is proposed as a model to study long-term neurocognitive sequelae of SARS-CoV-2 infection. The following review describes what is known so far in terms of molecular and epidemiological links among COVID-19, the brain, neurological symptoms, and AD and related dementias (ADRD). The primary objective of this large-scale collaboration is to clarify the pathogenesis of ADRD and to advance our understanding of the impact of a neurotropic virus on the long-term risk of cognitive decline and other CNS sequelae. No available evidence supports the notion that cognitive impairment after SARS-CoV-2 infection is a form of dementia (ADRD or otherwise). The longitudinal methodologies espoused by the consortium are intended to provide data to answer this question as clearly as possible controlling for possible confounders. Our specific hypothesis is that SARS-CoV-2 triggers ADRD-like pathology following the extended olfactory cortical network (EOCN) in older individuals with specific genetic susceptibility. The proposed harmonization strategies and flexible study designs offer the possibility to include large samples of under-represented racial and ethnic groups, creating a rich set of harmonized cohorts for future studies of the pathophysiology, determinants, long-term consequences, and trends in cognitive aging, ADRD, and vascular disease. We provide a framework for current and future studies to be carried out within the Consortium. 
We also offer a "green paper" to the research community, with a very broad, global base of support, on tools suitable for low- and middle-income countries, aimed at comparing and combining future longitudinal data on the topic. The Consortium proposes a combination of design and statistical methods as a means of approaching causal inference of the COVID-19 neuropsychiatric sequelae. We expect that deep phenotyping of neuropsychiatric sequelae may provide a series of candidate syndromes with phenomenological and biological characterization that can be further explored. By generating high-quality harmonized data across sites we aim to capture both descriptive and, where possible, causal associations.
  •  
3.
  • Derrick, John, et al. (author)
  • Property-Based Testing - The ProTest Project
  • 2010
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. ; 6286, s. 250-271
  • Journal article (peer-reviewed)abstract
    • The ProTest project is an FP7 STREP on property-based testing. The purpose of the project is to develop software engineering approaches that improve the reliability of service-oriented networks, and to support fault-finding and diagnosis based on specified properties of the system. To do so we will build automated tools that generate and run tests, monitor execution at run-time, and log events for analysis. The Erlang/Open Telecom Platform has been chosen as our initial implementation vehicle due to its robustness and reliability within the telecoms sector. It is noted for its success in Ericsson's ATM telecom switches (Ericsson is one of the project partners), as well as for multiple other uses, for example at Facebook and Yahoo. In this paper we provide an overview of the project goals and describe initial progress in developing property-based testing techniques and tools for the concurrent functional programming language Erlang.
  •  
4.
  • Li, He, et al. (author)
  • Identification of a Sjögren's syndrome susceptibility locus at OAS1 that influences isoform switching, protein expression, and responsiveness to type I interferons
  • 2017
  • In: PLOS Genetics. - : PUBLIC LIBRARY SCIENCE. - 1553-7390 .- 1553-7404. ; 13:6
  • Journal article (peer-reviewed)abstract
    • Sjogren's syndrome (SS) is a common, autoimmune exocrinopathy distinguished by keratoconjunctivitis sicca and xerostomia. Patients frequently develop serious complications including lymphoma, pulmonary dysfunction, neuropathy, vasculitis, and debilitating fatigue. Dysregulation of the type I interferon (IFN) pathway is a prominent feature of SS and is correlated with increased autoantibody titers and disease severity. To identify genetic determinants of IFN pathway dysregulation in SS, we performed cis-expression quantitative trait locus (eQTL) analyses focusing on differentially expressed type I IFN-inducible transcripts identified through a transcriptome profiling study. Multiple cis-eQTLs were associated with transcript levels of 2'-5'-oligoadenylate synthetase 1 (OAS1) peaking at rs10774671 (PeQTL = 6.05 x 10^-14). Association of rs10774671 with SS susceptibility was identified and confirmed through meta-analysis of two independent cohorts (P-meta = 2.59 x 10^-9; odds ratio = 0.75; 95% confidence interval = 0.66-0.86). The risk allele of rs10774671 shifts splicing of OAS1 from production of the p46 isoform to multiple alternative transcripts, including p42, p48, and p44. We found that the isoforms were differentially expressed within each genotype in controls and patients with and without autoantibodies. Furthermore, our results showed that the three alternatively spliced isoforms lacked translational response to type I IFN stimulation. The p48 and p44 isoforms also had impaired protein expression governed by the 3' end of the transcripts. The SS risk allele of rs10774671 has been shown by others to be associated with reduced OAS1 enzymatic activity and ability to clear viral infections, as well as reduced responsiveness to IFN treatment. Our results establish OAS1 as a risk locus for SS and support a potential role for defective viral clearance due to altered IFN response as a genetic pathophysiological basis of this complex autoimmune disease.
  •  
5.
  • Liu, Ke, et al. (author)
  • X Chromosome Dose and Sex Bias in Autoimmune Diseases
  • 2016
  • In: Arthritis & Rheumatology. - : WILEY-BLACKWELL. - 2326-5191 .- 2326-5205. ; 68:5, s. 1290-1300
  • Journal article (peer-reviewed)abstract
    • Objective. More than 80% of autoimmune disease predominantly affects females, but the mechanism for this female bias is poorly understood. We suspected that an X chromosome dose effect accounts for this, and we undertook this study to test our hypothesis that trisomy X (47, XXX; occurring in ~1 in 1,000 live female births) would be increased in patients with female-predominant diseases (systemic lupus erythematosus [SLE], primary Sjögren's syndrome [SS], primary biliary cirrhosis, and rheumatoid arthritis [RA]) compared to patients with diseases without female predominance (sarcoidosis) and compared to controls. Methods. All subjects in this study were female. We identified subjects with 47, XXX using aggregate data from single-nucleotide polymorphism arrays, and, when possible, we confirmed the presence of 47, XXX using fluorescence in situ hybridization or quantitative polymerase chain reaction. Results. We found 47, XXX in 7 of 2,826 SLE patients and in 3 of 1,033 SS patients, but in only 2 of 7,074 controls (odds ratio in the SLE and primary SS groups 8.78 [95% confidence interval 1.67-86.79], P = 0.003 and odds ratio 10.29 [95% confidence interval 1.18-123.47], P = 0.02, respectively). One in 404 women with SLE and 1 in 344 women with SS had 47, XXX. There was an excess of 47, XXX among SLE and SS patients. Conclusion. The estimated prevalence of SLE and SS in women with 47, XXX was ~2.5 and ~2.9 times higher, respectively, than that in women with 46, XX and ~25 and ~41 times higher, respectively, than that in men with 46, XY. No statistically significant increase of 47, XXX was observed in other female-biased diseases (primary biliary cirrhosis or RA), supporting the idea of multiple pathways to sex bias in autoimmunity.
  •  
7.
  • Arts, Thomas, 1969, et al. (author)
  • Accelerating race condition detection through procrastination
  • 2011
  • In: Proceedings of the 2011 ACM SIGPLAN Erlang Workshop, Tokyo, 23 September 2011. - New York, NY, USA : ACM. - 9781450308595 ; , s. 14-22
  • Conference paper (peer-reviewed)abstract
    • Race conditions are notoriously frustrating to find, and good tools can help. The main difficulty is reliably provoking the race condition. In previous work we presented a randomising scheduler for Erlang that helps with this task. In a language without pervasive shared mutable state, such as Erlang, performing scheduling decisions at random uncovers race conditions surprisingly well. However, it is not always enough. We describe a technique, procrastination, that aims to provoke race conditions more often than by random scheduling alone. It works by running the program and looking for pairs of events that might interfere, such as two message sends to the same process. Having found such a pair of events, we re-run the program but try to provoke a race condition by reversing the order of the two events. We apply our technique to a piece of industrial Erlang code. Compared to random scheduling alone, procrastination allows us to find minimal failing test cases more reliably and more quickly.
  •  
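
The procrastination idea described in the abstract above can be caricatured in a few lines: record a trace, pick pairs of potentially interfering events (here, two sends to the same process), and build new schedules with each pair reversed. The Haskell sketch below is only a toy model of that search; the real technique re-runs the program itself under a controlled scheduler rather than replaying a recorded list, and all names below are invented:

      -- Toy model of procrastination-style schedule exploration.
      data Event = Send { sender :: String, receiver :: String, payload :: Int }
        deriving (Eq, Show)

      -- Two sends may interfere (for this sketch) if they target the same process.
      interferes :: Event -> Event -> Bool
      interferes a b = receiver a == receiver b

      -- Indices of candidate pairs, earliest event first.
      candidatePairs :: [Event] -> [(Int, Int)]
      candidatePairs tr =
        [ (i, j)
        | (i, a) <- zip [0 ..] tr
        , (j, b) <- zip [0 ..] tr
        , i < j
        , interferes a b
        ]

      -- A new schedule that replays the trace with one candidate pair swapped.
      swapPair :: (Int, Int) -> [Event] -> [Event]
      swapPair (i, j) tr =
        [ if k == i then tr !! j else if k == j then tr !! i else e
        | (k, e) <- zip [0 ..] tr
        ]

      main :: IO ()
      main = mapM_ print [ swapPair p trace0 | p <- candidatePairs trace0 ]
        where
          trace0 = [Send "a" "srv" 1, Send "b" "srv" 2, Send "a" "log" 3]
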
8.
  • Arts, Thomas, 1969, et al. (author)
  • Graphical editing support for QuickCheck models
  • 2015
  • In: 2015 IEEE 8th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2015 - Proceedings. - 2159-4848. - 9781479918850
  • Conference paper (peer-reviewed)abstract
    • QuickCheck can test a system by specifying a state machine for the API exported by that system. This state machine specification includes a list of possible API calls. Each call is accompanied by a precondition, a postcondition, a generator for the arguments, and a description of how the state is changed. Based on this specification QuickCheck generates a random sequence of API calls. The preconditions ensure that a generated sequence is valid, and the postconditions check that the system behaves as expected. Many systems require an initialisation call before other calls, describing the transition from an uninitialised to an initialised state. QuickCheck offers two ways of specifying transitions between states: using preconditions or a finite state machine abstraction. In this paper we show, by means of an example, that the latter approach is to be preferred. In addition, we present a recent extension to QuickCheck that allows a user to graphically create and edit a finite state machine specification. This extension considerably simplifies and speeds up the specification of a finite state machine, which can be regarded as a formal model of the system. The graphical representation makes the formal model easier to understand, and visualises the distribution of API calls. Moreover, the extension allows the user to change this distribution.
  •  
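
As a rough illustration of the state-machine testing style summarised in the abstract above (API calls with generators, a model of the state, and postconditions), here is a hand-rolled sketch in plain Haskell QuickCheck. The counter API, command names, and model are invented for the example; preconditions are omitted because every call is always valid here, and Quviq's actual state-machine library is richer than this:

      -- Hand-rolled state-machine-style test: generate a random command
      -- sequence, run it against the real system, and check after every call
      -- that the observed result agrees with a simple model (the postcondition).
      import Test.QuickCheck
      import Data.IORef

      data Cmd = Incr | Reset | Get deriving Show

      -- Generator: which API calls may appear in a sequence.
      genCmds :: Gen [Cmd]
      genCmds = listOf (elements [Incr, Reset, Get])

      -- Model: how each call changes the abstract state (here, an Int).
      step :: Int -> Cmd -> Int
      step n Incr  = n + 1
      step _ Reset = 0
      step n Get   = n

      -- System under test: a mutable counter.
      runCmd :: IORef Int -> Cmd -> IO Int
      runCmd ref Incr  = modifyIORef ref (+ 1) >> readIORef ref
      runCmd ref Reset = writeIORef ref 0      >> return 0
      runCmd ref Get   = readIORef ref

      -- Property: after every call, the real counter agrees with the model.
      prop_counterAgreesWithModel :: Property
      prop_counterAgreesWithModel =
        forAll genCmds $ \cmds -> ioProperty $ do
          ref <- newIORef 0
          let go model []       = return True
              go model (c : cs) = do
                observed <- runCmd ref c
                let model' = step model c
                if observed == model' then go model' cs else return False
          go 0 cmds

      main :: IO ()
      main = quickCheck prop_counterAgreesWithModel

A precondition would simply constrain which command sequences are generated or run; the finite-state-machine abstraction mentioned in the abstract additionally names the states and restricts which calls may follow which.
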
9.
  • Arts, Thomas, 1969, et al. (author)
  • Testing AUTOSAR software with QuickCheck
  • 2015
  • In: 2015 IEEE 8th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2015 - Proceedings. - 2159-4848. - 9781479918850
  • Conference paper (peer-reviewed)abstract
    • AUTOSAR (AUTomotive Open System ARchitecture) is an evolving standard for embedded software in vehicles, defined by the automotive industry, and implemented by many different vendors. On behalf of Volvo Cars, we have developed model-based acceptance tests for some critical AUTOSAR components, to guarantee that implementations from different vendors are compatible. We translated over 3000 pages of textual specifications into QuickCheck models, and tested many different implementations using large volumes of generated tests. This exposed over 200 issues, which we raised with Volvo and the software vendors. Compared to an earlier manual approach, ours is more efficient, more effective, and more correct.
  •  
10.
  • Arts, Thomas, 1969, et al. (author)
  • Testing Implementations of Formally Verified Algorithms
  • 2005
  • In: Proceedings of the 5th Conference on Software Engineering Research and Practice in Sweden.
  • Conference paper (peer-reviewed)abstract
    • Algorithms described in literature can often be used to solve practical, industrial problems. In safety-critical industrial settings, algorithms that have been formally verified should be even more attractive candidates for implementations. Nevertheless, we observe little transfer of algorithms from research papers into products. In this paper we describe a case study on the implementation of algorithms for the widely known and broadly studied problem of leader election. Despite thousands of articles on that topic, it still requires a lot of engineering to select the relevant articles, and get a correct algorithm implemented in an industrial setting. Modifications are necessary to meet all requirements. We propose adaptation and testing of formal properties as a realistic and cheap way to check the correctness of the modifications, since performing a formal proof seems unrealistic for industrial systems. We show how we use the properties stated in the articles to guide our tests.
  •  
11.
  • Björk, Magnus, 1977, et al. (author)
  • Exposed Datapath for Efficient Computing
  • 2006
  • Reports (other academic/artistic)abstract
    • We introduce FlexCore, which is the first exemplar of a processor based on the FlexSoC processor paradigm. The FlexCore utilizes an exposed datapath for increased performance. Microbenchmarks yield a performance boost of a factor of two over a traditional five-stage pipeline with the same functional units as the FlexCore. We describe our approach to compiling for the FlexCore. A flexible interconnect allows the FlexCore datapath to be dynamically reconfigured as a consequence of code generation. Additionally, specialized functional units may be introduced and utilized within the same architecture and compilation framework. The exposed datapath requires a wide control word. The conducted evaluation of two microbenchmarks confirms that this increases the instruction bandwidth and memory footprint. This calls for efficient instruction decoding, as proposed in the FlexSoC paradigm.
  •  
13.
  • Byrski, Aleksander, et al. (author)
  • Special section on functional paradigm for high performance computing
  • 2018
  • In: Future Generation Computer Systems. - : Elsevier BV. - 0167-739X. ; 79, s. 643-644
  • Journal article (other academic/artistic)abstract
    • This paper is a foreword to the Special Section of the Future Generation Computer Systems journal on the Functional Paradigm for High-Performance Computing, connected with the Lambda Days 2017 conference. The paper situates the special section within the current state of the art and gives overviews of the four papers constituting it. © 2017
  •  
14.
  • Danielsson, Nils Anders, 1979, et al. (author)
  • Fast and Loose Reasoning is Morally Correct
  • 2006
  • In: Conference record of the 33rd ACM SIGPLAN-SIGACT symposium on Principles of programming languages (POPL 2006). - 1595930272 ; , s. 206-217
  • Conference paper (peer-reviewed)abstract
    • Functional programmers often reason about programs as if they were written in a total language, expecting the results to carry over to non-total (partial) languages. We justify such reasoning. Two languages are defined, one total and one partial, with identical syntax. The semantics of the partial language includes partial and infinite values, and all types are lifted, including the function spaces. A partial equivalence relation (PER) is then defined, the domain of which is the total subset of the partial language. For types not containing function spaces the PER relates equal values, and functions are related if they map related values to related values. It is proved that if two closed terms have the same semantics in the total language, then they have related semantics in the partial language. It is also shown that the PER gives rise to a bicartesian closed category which can be used to reason about values in the domain of the relation.
  •  
15.
  • Fetscher, B., et al. (author)
  • Making Random Judgments: Automatically Generating Well-Typed Terms from the Definition of a Type-System
  • 2015
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. ; 9032, s. 383-405
  • Journal article (peer-reviewed)abstract
    • This paper presents a generic method for randomly generating well-typed expressions. It starts from a specification of a typing judgment in PLT Redex and uses a specialized solver that employs randomness to find many different valid derivations of the judgment form. Our motivation for building these random terms is to more effectively falsify conjectures as part of the tool-support for semantics models specified in Redex. Accordingly, we evaluate the generator against the other available methods for Redex, as well as the best available custom well-typed term generator. Our results show that our new generator is much more effective than generation techniques that do not explicitly take types into account and is competitive with generation techniques that do, even though they are specialized to particular type-systems and ours is not.
  •  
16.
  • Gerdes, Alex, 1978, et al. (author)
  • Linking Unit Tests and Properties
  • 2015
  • In: Erlang Workshop, 2015-09-04, Vancouver. - New York, NY, USA : ACM. - 9781450338059 ; , s. 19-26
  • Conference paper (peer-reviewed)abstract
    • QuickCheck allows you to validate whether your software has particular desired properties. These properties can be regarded as an abstraction over many unit tests. QuickCheck uses generated random input data to validate such a property. If QuickCheck finds a counterexample it becomes immediately clear what we are testing. If, however, all tests pass, it is not immediately clear what we have tested, since we don’t see the actual generated test cases. In this case it is good to think about what we have actually tested. QuickCheck offers the possibility to gather statistics about the test cases, which is very insightful. Still, after inspecting the test data distribution many QuickCheck users wonder whether a particular unit test case has been tested. Often a property is developed with a certain set of unit tests in mind. We have developed a tool that checks whether a given unit test can be generated by a property. This tool helps in understanding a property, making it easier to see what it really tests and hence judge its quality.
  •  
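
The statistics-gathering facility mentioned in the abstract above can be seen with QuickCheck's classify and cover combinators (the cover signature used here is the one in recent QuickCheck releases). In the sketch below the property and the chosen "unit-test input" are invented; the point is that the coverage report makes it visible that a specific input such as [1,2,3] is essentially never generated, which is exactly the gap the paper's tool addresses:

      -- Gathering test-case statistics with classify and cover.
      import Test.QuickCheck

      prop_reverseTwice :: [Int] -> Property
      prop_reverseTwice xs =
        classify (null xs)         "empty list" $
        classify (length xs == 1)  "singleton"  $
        cover 1 (xs == [1, 2, 3])  "the unit-test input [1,2,3]" $
        reverse (reverse xs) === xs

      main :: IO ()
      main = quickCheck prop_reverseTwice   -- prints a coverage warning for [1,2,3]
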
17.
  • Gerdes, Alex, 1978, et al. (author)
  • Understanding formal specifications through good examples
  • 2018
  • In: Erlang 2018 - Proceedings of the 17th ACM SIGPLAN International Workshop on Erlang, co-located with ICFP 2018. - New York, NY, USA : ACM. ; , s. 13-24
  • Conference paper (peer-reviewed)abstract
    • Formal specifications of software applications are hard to understand, even for domain experts. Because a formal specification is abstract, reading it does not immediately convey the expected behaviour of the software. Carefully chosen examples of the software’s behaviour, on the other hand, are concrete and easy to understand—but poorly-chosen examples are more confusing than helpful. In order to understand formal specifications, software developers need good examples. We have created a method that automatically derives a suite of good examples from a formal specification. Each example is judged by our method to illustrate one feature of the specification. The generated examples give users a good understanding of the behaviour of the software. We evaluated our method by measuring how well students understood an API when given different sets of examples; the students given our examples showed significantly better understanding.
  •  
18.
  • Goldstein, Harrison, et al. (author)
  • Do Judge a Test by its Cover: Combining Combinatorial and Property-Based Testing
  • 2021
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. ; 12648 LNCS, s. 264-291
  • Conference paper (peer-reviewed)abstract
    • Property-based testing uses randomly generated inputs to validate high-level program specifications. It can be shockingly effective at finding bugs, but it often requires generating a very large number of inputs to do so. In this paper, we apply ideas from combinatorial testing, a powerful and widely studied testing methodology, to modify the distributions of our random generators so as to find bugs with fewer tests. The key concept is combinatorial coverage, which measures the degree to which a given set of tests exercises every possible choice of values for every small combination of input features. In its “classical” form, combinatorial coverage only applies to programs whose inputs have a very particular shape—essentially, a Cartesian product of finite sets. We generalize combinatorial coverage to the richer world of algebraic data types by formalizing a class of sparse test descriptions based on regular tree expressions. This new definition of coverage inspires a novel combinatorial thinning algorithm for improving the coverage of random test generators, requiring many fewer tests to catch bugs. We evaluate this algorithm on two case studies, a typed evaluator for System F terms and a Haskell compiler, showing significant improvements in both.
  •  
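
The "classical" combinatorial coverage that this abstract generalises can be sketched directly for inputs that are a Cartesian product of finite fields. The configuration type, field names, and test cases below are invented; the function simply measures which pairs of field values a test suite exercises:

      -- Pairwise combinatorial coverage for a Cartesian-product input type.
      import qualified Data.Set as Set

      data Config = Config { os :: String, arch :: String, opt :: Int }
        deriving (Show, Eq)

      -- All (field-index, value) pairs exercised by one test case.
      pairsOf :: Config -> Set.Set ((Int, String), (Int, String))
      pairsOf c = Set.fromList [ (a, b) | a <- feats, b <- feats, a < b ]
        where feats = [(0, os c), (1, arch c), (2, show (opt c))]

      -- Fraction of all possible value pairs covered by a test suite.
      pairwiseCoverage :: [Config] -> [Config] -> Double
      pairwiseCoverage universe tests =
        fromIntegral (Set.size covered) / fromIntegral (Set.size possible)
        where
          covered  = Set.unions (map pairsOf tests)
          possible = Set.unions (map pairsOf universe)

      main :: IO ()
      main = print (pairwiseCoverage universe [Config "linux" "x86" 0, Config "mac" "arm" 2])
        where
          universe = [ Config o a n | o <- ["linux", "mac"], a <- ["x86", "arm"], n <- [0, 2] ]
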
20.
  • Hritcu, C., et al. (author)
  • Testing noninterference, quickly
  • 2013
  • In: Proceedings of the ACM SIGPLAN International Conference on Functional Programming, ICFP. - New York, NY, USA : ACM. - 9781450323260 ; 2013, s. 455-468
  • Conference paper (peer-reviewed)abstract
    • Information-flow control mechanisms are difficult to design and labor intensive to prove correct. To reduce the time wasted on proof attempts doomed to fail due to broken definitions, we advocate modern random testing techniques for finding counterexamples during the design process. We show how to use QuickCheck, a property-based random-testing tool, to guide the design of a simple information-flow abstract machine. We find that both sophisticated strategies for generating well-distributed random programs and readily falsifiable formulations of noninterference properties are critically important. We propose several approaches and evaluate their effectiveness on a collection of injected bugs of varying subtlety. We also present an effective technique for shrinking large counterexamples to minimal, easily comprehensible ones. Taken together, our best methods enable us to quickly and automatically generate simple counterexamples for all these bugs.
  •  
21.
  • Hritcu, C., et al. (author)
  • Testing noninterference, quickly
  • 2016
  • In: Journal of Functional Programming. - : Cambridge University Press (CUP). - 1469-7653 .- 0956-7968. ; 26
  • Journal article (peer-reviewed)abstract
    • Information-flow control mechanisms are difficult both to design and to prove correct. To reduce the time wasted on doomed proof attempts due to broken definitions, we advocate modern random-testing techniques for finding counterexamples during the design process. We show how to use QuickCheck, a property-based random-testing tool, to guide the design of increasingly complex information-flow abstract machines, leading up to a sophisticated register machine with a novel and highly permissive flow-sensitive dynamic enforcement mechanism that is sound in the presence of first-class public labels. We find that both sophisticated strategies for generating well-distributed random programs and readily falsifiable formulations of noninterference properties are critically important for efficient testing. We propose several approaches and evaluate their effectiveness on a collection of injected bugs of varying subtlety. We also present an effective technique for shrinking large counterexamples to minimal, easily comprehensible ones. Taken together, our best methods enable us to quickly and automatically generate simple counterexamples for more than 45 bugs. Moreover, we show how testing guides the discovery of the sophisticated invariants needed for the noninterference proof of our most complex machine.
  •  
22.
  • Hu, Z. J., et al. (author)
  • How functional programming mattered
  • 2015
  • In: National Science Review. - : Oxford University Press (OUP). - 2095-5138 .- 2053-714X. ; 2:3, s. 349-370
  • Research review (peer-reviewed)abstract
    • In 1989 when functional programming was still considered a niche topic, Hughes wrote a visionary paper arguing convincingly 'why functional programming matters'. More than two decades have passed. Has functional programming really mattered? Our answer is a resounding 'Yes!'. Functional programming is now at the forefront of a new generation of programming technologies, and enjoying increasing popularity and influence. In this paper, we review the impact of functional programming, focusing on how it has changed the way we may construct programs, the way we may verify programs, and fundamentally the way we may think about programs.
  •  
23.
  • Hughes, John, 1958, et al. (author)
  • Erlang/QuickCheck
  • 2003
  • In: Ninth International Erlang/OTP User Conference, Älvsjö, Sweden. November 2003.
  • Conference paper (other academic/artistic)
  •  
24.
  • Hughes, John, 1958 (author)
  • Experiences from teaching functional programming at Chalmers
  • 2008
  • In: SIGPLAN Notices (ACM Special Interest Group on Programming Languages). - : Association for Computing Machinery (ACM). - 0730-8566 .- 0362-1340 .- 1558-1160. ; 43:11, s. 77-80
  • Journal article (peer-reviewed)abstract
    • John Hughes shared his experiences of teaching functional programming at Chalmers University in Gothenburg, along with the successes and the problems he faced. His highest priority was to convince students that they could write real, interesting programs in Haskell by the end of the first course. He eliminated all course material not directly relevant to programming, such as a section on program proofs. He taught the material in a different order from most functional programming texts, for example introducing Haskell input/output in the first lecture. He also included two lectures on GUI programming (using wxHaskell), building a simple straight-line-diagram editor in a couple of pages of code. He also introduced QuickCheck (random, property-based) testing as part of the course, teaching students first to write the left-hand side and type of a function, then its property, and finally its right-hand side.
  •  
25.
  • Hughes, John, 1958 (author)
  • Experiences with QuickCheck: testing the hard stuff and staying sane
  • 2016
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. - 9783319309354 ; 9600, s. 169-186
  • Conference paper (peer-reviewed)abstract
    • This is not a typical scientific paper. It does not present a new method, with careful experiments to evaluate it, and detailed references to related work. Rather, it recounts some of my experiences over the last 15 years, working with QuickCheck, and its purpose is as much to entertain as to inform. QuickCheck is a random testing tool that Koen Claessen and I invented, which has since become the testing tool of choice in the Haskell community. In 2006 I co-founded Quviq, to develop and market an Erlang version, which we have since applied for a wide variety of customers, encountering many fascinating testing problems as a result. This paper introduces Quviq QuickCheck, and in particular the extensions made for testing stateful code, via a toy example in C. It goes on to describe the largest QuickCheck project to date, which developed acceptance tests for AUTOSAR C code on behalf of Volvo Cars. Finally it explains a race detection method that nailed a notorious bug plaguing Klarna, northern Europe’s market leader in invoicing systems for e-commerce. Together, these examples give a reasonable overview of the way QuickCheck has been used in industrial practice.
  •  
26.
  • Hughes, John, 1958, et al. (author)
  • Find More Bugs with QuickCheck!
  • 2016
  • In: 11th IEEE/ACM International Workshop on Automation of Software Test (AST 2016). - New York, NY, USA : ACM. - 9781450341516 ; , s. 71-77
  • Conference paper (peer-reviewed)abstract
    • Random testing is increasingly popular and successful, but tends to spend most time rediscovering the "most probable bugs" again and again, reducing the value of long test runs on buggy software. We present a new automated method to adapt random test case generation so that already-discovered bugs are avoided, and further test effort can be devoted to searching for new bugs instead. We evaluate our method primarily against RANDOOP-style testing; in three different settings our method avoids rediscovering bugs more successfully than RANDOOP, and in some cases finds bugs that RANDOOP did not find at all.
  •  
27.
  • Hughes, John, 1958, et al. (author)
  • FlexSoC: Combining Flexibility and Efficiency in SoC Designs
  • 2003
  • In: Proceedings of 21st Norchip Conference. ; Riga, Latvia, s. 52-55
  • Conference paper (peer-reviewed)abstract
    • The FlexSoC project aims at developing a design framework that makes it possible to combine the computational speed and energy-efficiency of specialized hardware accelerators with the flexibility of programmable processors. FlexSoC approaches this problem by defining a uniform programming interface across the heterogeneous structure of processing resources. This paper justifies our approach and also discusses the central research issues we will focus on in the areas of VLSI design, computer architecture, and programming and verification.
  •  
28.
  • Hughes, John, 1958 (author)
  • How to Specify It!: A Guide to Writing Properties of Pure Functions
  • 2020
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. ; 12053, s. 58-83
  • Conference paper (peer-reviewed)abstract
    • Property-based testing tools test software against a specification, rather than a set of examples. This tutorial paper presents five generic approaches to writing such specifications (for purely functional code). We discuss the costs, benefits, and bug-finding power of each approach, with reference to a simple example with eight buggy variants. The lessons learned should help the reader to develop effective property-based tests in the future.
  •  
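
Two of the generic property styles that this tutorial discusses, an invariant-preservation property and a model-based property, can be sketched for a toy insertion function on sorted lists (the paper's own running example and its buggy variants are richer than this):

      -- Two property styles for a pure function: invariant preservation and
      -- agreement with an obvious model.
      import Data.List (insert, sort)
      import Test.QuickCheck

      sorted :: Ord a => [a] -> Bool
      sorted xs = and (zipWith (<=) xs (drop 1 xs))

      -- Invariant preservation: inserting into a sorted list keeps it sorted.
      prop_insertPreservesSorted :: Int -> [Int] -> Bool
      prop_insertPreservesSorted x xs = sorted (insert x (sort xs))

      -- Model-based: insert agrees with the model "cons, then sort".
      prop_insertMatchesModel :: Int -> [Int] -> Property
      prop_insertMatchesModel x xs = insert x (sort xs) === sort (x : xs)

      main :: IO ()
      main = do
        quickCheck prop_insertPreservesSorted
        quickCheck prop_insertMatchesModel
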
29.
  • Hughes, John, 1958, et al. (author)
  • How well are your requirements tested?
  • 2016
  • In: Proceedings - 2016 IEEE International Conference on Software Testing, Verification and Validation, ICST 2016. - 9781509018260 ; , s. 244-254
  • Conference paper (peer-reviewed)abstract
    • We address the question: to what extent does covering requirements ensure that a test suite is effective at revealing faults? To answer it, we generate minimal test suites that cover all requirements, and assess the tests they contain. They turn out to be very poor, ultimately because the notion of covering a requirement is more subtle than it appears to be at first. We propose several improvements to requirements tracking during testing, which enable us to generate minimal test suites close to what a human developer would write. However, there remains a class of plausible bugs which such suites are very poor at finding, but which random testing finds rather easily.
  •  
30.
  • Hughes, John, 1958, et al. (author)
  • Industrial Challenges in Semantics
  • 2003
  • Reports (other academic/artistic)abstract
    • This document reports on the outcome of consultations with our colleagues in industry, with the aim of identifying critical areas where programming language research can make a contribution. We asked each person: if you could supply a list of problems for Europe's programming language researchers to work on, what would it be? Their responses were broadly in agreement, so it is with some confidence that we identify the critical areas described in this document.
  •  
31.
  • Hughes, John, 1958, et al. (author)
  • Mysteries of Dropbox: Property-based Testing of a Distributed Synchronization Service
  • 2016
  • In: Proceedings - 2016 IEEE International Conference on Software Testing, Verification and Validation, ICST 2016. - 9781509018260 ; , s. 135-145
  • Conference paper (peer-reviewed)abstract
    • File synchronization services such as Dropbox are used by hundreds of millions of people to replicate vital data. Yet rigorous models of their behavior are lacking. We present the first formal and testable model of the core behavior of a modern file synchronizer, and we use it to discover surprising behavior in two widely deployed synchronizers. Our model is based on a technique for testing nondeterministic systems that avoids requiring that the system's internal choices be made visible to the testing framework.
  •  
33.
  • Hughes, John, 1958 (author)
  • Software testing with QuickCheck
  • 2010
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. - 9783642176845 ; 6299, s. 183-223
  • Conference paper (peer-reviewed)abstract
    • This paper presents a tutorial, with extensive exercises, in the use of Quviq QuickCheck - a property-based testing tool for Erlang, which enables developers to formulate formal specifications of their code and to use them for testing. We cover the basic concepts of properties and test-data generators, properties for testing abstract data types, and a state-machine modelling approach to testing stateful systems. Finally we discuss applications of QuickCheck in industry.
  •  
34.
  • Hughes, John, 1958, et al. (author)
  • Specification-Based Testing with QuickCheck
  • 2003
  • In: The fun of programming - edited by Jeremy Gibbons and Oege de Moor. - 1403907722 ; , s. 17-40
  • Book chapter (other academic/artistic)
  •  
35.
  • Hughes, John, 1958, et al. (author)
  • Testing a database for race conditions with QuickCheck
  • 2011
  • In: Proceedings of the 2011 ACM SIGPLAN Erlang Workshop, Tokyo, 23 September 2011. - New York, NY, USA : ACM. - 9781450308595 ; , s. 72-77
  • Conference paper (peer-reviewed)abstract
    • In 2009, Claessen et al. presented a way of testing for race conditions in Erlang programs, using QuickCheck to generate parallel tests, a randomizing scheduler to provoke races, and a sequential consistency condition to detect failures of atomicity [1]. That work used a small industrial prototype as the main example, showing how two race conditions could be detected and diagnosed. In this paper, we apply the same methods to dets, a vital component of the mnesia database system, and more than an order of magnitude larger. dets is known to fail occasionally in production, making it a promising candidate for a race condition hunt. We found five race conditions with relatively little effort, two of which may account for the observed failures in production. We explain how the testing was done, present most of the QuickCheck specification used, and describe the problems we discovered and their causes.
  •  
36.
  • Hughes, John, 1958, et al. (author)
  • Using temporal relations to specify and test an instant messaging server
  • 2010
  • In: Proceedings - International Conference on Software Engineering. - New York, NY, USA : ACM. - 0270-5257. - 9781605589701 ; , s. 95-102
  • Conference paper (other academic/artistic)abstract
    • Asynchronous events are awkward to handle in specification-based testing. State machine specifications become very complex when variable event order, timing constraints, and timing uncertainties must all be captured. We propose an alternative formalism for specifying asynchronous behaviour based on temporal relations, designed to support more declarative and modular specifications. Temporal relations are in a sense a combination of bulk data types and temporal logic. We illustrate the formalism by specifying parts of a simplified instant messaging server, and show that it can handle timing uncertainty very simply. We have implemented the formalism as part of Quviq QuickCheck, a commercial specification-based testing tool, and we describe its application to testing ejabberd, the leading instant messaging server based on the open XMPP protocol. Copyright 2010 ACM.
  •  
37.
  • Karlsson, Stefan, et al. (author)
  • Exploring API behaviours through generated examples
  • 2024
  • In: Software Quality Journal. - : SPRINGER. - 1573-1367 .- 0963-9314. ; 32:2, s. 729-763
  • Journal article (peer-reviewed)abstract
    • Understanding the behaviour of a system's API can be hard. Giving users access to relevant examples of how an API behaves has been shown to make this easier for them. In addition, such examples can be used to verify expected behaviour or identify unwanted behaviours. Methods for automatically generating examples have existed for a long time. However, state-of-the-art methods rely on either white-box information, such as source code, or on formal specifications of the system behaviour. But what if you do not have access to either? This may be the case, for example, when interacting with a third-party API. In this paper, we present an approach to automatically generate relevant examples of behaviours of an API, without requiring either source code or a formal specification of behaviour. Evaluation on an industry-grade REST API shows that our method can produce small and relevant examples that can help engineers to understand the system under exploration.
  •  
38.
  • Lampropoulos, Leonidas, et al. (author)
  • Beginner's luck: a language for property-based generators
  • 2017
  • In: SIGPLAN Notices (ACM Special Interest Group on Programming Languages). - New York, NY, USA : ACM. - 0730-8566. ; 52:1, s. 114-129
  • Conference paper (peer-reviewed)abstract
    • Property-based random testing a la QuickCheck requires building efficient generators for well-distributed random data satisfying complex logical predicates, but writing these generators can be difficult and error prone. We propose a domain-specific language in which generators are conveniently expressed by decorating predicates with lightweight annotations to control both the distribution of generated values and the amount of constraint solving that happens before each variable is instantiated. This language, called Luck, makes generators easier to write, read, and maintain. We give Luck a formal semantics and prove several fundamental properties, including the soundness and completeness of random generation with respect to a standard predicate semantics. We evaluate Luck on common examples from the property-based testing literature and on two significant case studies, showing that it can be used in complex domains with comparable bug-finding effectiveness and a significant reduction in testing code size compared to handwritten generators.
  •  
39.
  • Lindström Claessen, Koen, 1975, et al. (author)
  • Finding Race Conditions in Erlang with QuickCheck and PULSE
  • 2009
  • In: ICFP'09: Proceedings of the ACM SIGPLAN International Conference on Functional Programming. - New York, NY, USA : ACM. - 9781605583327 ; , s. 149-160
  • Conference paper (peer-reviewed)abstract
    • We address the problem of testing and debugging concurrent, distributed Erlang applications. In concurrent programs, race conditions are a common class of bugs and are very hard to find in practice. Traditional unit testing is normally unable to help find all race conditions, because their occurrence depends so much on timing. Therefore, race conditions are often found during system testing, where due to the vast amount of code under test, it is often hard to diagnose the error resulting from race conditions. We present three tools (QuickCheck, PULSE, and a visualizer) that in combination can be used to test and debug concurrent programs in unit testing with a much better possibility of detecting race conditions. We evaluate our method on an industrial concurrent case study and illustrate how we find and analyze the race conditions.
  •  
40.
  • Lindström Claessen, Koen, 1975, et al. (author)
  • Finding race conditions in erlang with quickcheck and PULSE
  • 2009
  • In: ACM SIGPLAN Notices. - : Association for Computing Machinery (ACM). - 1523-2867 .- 0362-1340 .- 1558-1160. ; 44:9, s. 149-160
  • Journal article (peer-reviewed)abstract
    • We address the problem of testing and debugging concurrent, distributed Erlang applications. In concurrent programs, race conditions are a common class of bugs and are very hard to find in practice. Traditional unit testing is normally unable to help find all race conditions, because their occurrence depends so much on timing. Therefore, race conditions are often found during system testing, where due to the vast amount of code under test, it is often hard to diagnose the error resulting from race conditions. We present three tools (QuickCheck, PULSE, and a visualizer) that in combination can be used to test and debug concurrent programs in unit testing with a much better possibility of detecting race conditions. We evaluate our method on an industrial concurrent case study and illustrate how we find and analyze the race conditions.
  •  
41.
  • Lindström Claessen, Koen, 1975, et al. (author)
  • QuickCheck: a lightweight tool for random testing of Haskell programs
  • 2011
  • In: SIGPLAN Notices (ACM Special Interest Group on Programming Languages). - : Association for Computing Machinery (ACM). - 0730-8566 .- 0362-1340 .- 1558-1160. ; 46:4, s. 53-64
  • Journal article (peer-reviewed)abstract
    • QuickCheck is a tool which aids the Haskell programmer in formulating and testing properties of programs. Properties are described as Haskell functions, and can be automatically tested on random input, but it is also possible to define custom test data generators. We present a number of case studies, in which the tool was successfully used, and also point out some pitfalls to avoid. Random testing is especially suitable for functional programs because properties can be stated at a fine grain. When a function is built from separately tested components, then random testing suffices to obtain good coverage of the definition under test.
  •  
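
The style described in the abstract above, properties as ordinary Haskell functions plus optional custom generators, looks roughly like the following sketch; the particular properties and the generator are invented for illustration:

      -- A property is an ordinary Haskell function, tested on random input;
      -- a custom generator controls the distribution of test data.
      import Test.QuickCheck

      prop_reverseAppend :: [Int] -> [Int] -> Property
      prop_reverseAppend xs ys = reverse (xs ++ ys) === reverse ys ++ reverse xs

      -- A custom generator: small, mostly-sorted lists of small numbers.
      genMostlySorted :: Gen [Int]
      genMostlySorted = do
        n  <- choose (0, 10)
        xs <- vectorOf n (choose (0, 100))
        frequency [(3, return (scanl1 (+) xs)), (1, return xs)]

      prop_withCustomGen :: Property
      prop_withCustomGen = forAll genMostlySorted $ \xs ->
        length (reverse xs) === length xs

      main :: IO ()
      main = do
        quickCheck prop_reverseAppend
        quickCheck prop_withCustomGen
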
42.
  • Lindström Claessen, Koen, 1975, et al. (author)
  • QuickCheck: A lightweight tool for random testing of Haskell programs
  • 2000
  • In: 5th ACM SIGPLAN International Conference on Functional Programming (ICFP'00); Montreal, Que, Can; ; 18 September 2000 through 21 September 2000. - : Association for Computing Machinery (ACM). ; 35:9, s. 268-279
  • Conference paper (peer-reviewed)abstract
    • QuickCheck is a tool which aids the Haskell programmer in formulating and testing properties of programs. Properties are described as Haskell functions, and can be automatically tested on random input, but it is also possible to define custom test data generators. We present a number of case studies, in which the tool was successfully used, and also point out some pitfalls to avoid. Random testing is especially suitable for functional programs because properties can be stated at a fine grain. When a function is built from separately tested components, then random testing suffices to obtain good coverage of the definition under test.
  •  
43.
  • Lindström Claessen, Koen, 1975, et al. (author)
  • QuickSpec: Guessing Formal Specifications using Testing
  • 2010
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Berlin, Heidelberg : Springer Berlin Heidelberg. - 1611-3349 .- 0302-9743. - 9783642139765 ; 6143, s. 6-21
  • Conference paper (peer-reviewed)abstract
    • We present QuickSpec, a tool that automatically generates algebraic specifications for sets of pure functions. The tool is based on testing, rather than static analysis or theorem proving. The main challenge QuickSpec faces is to keep the number of generated equations to a minimum while maintaining completeness. We demonstrate how QuickSpec can improve one’s understanding of a program module by exploring the laws that are generated using two case studies: a heap library for Haskell and a fixed-point arithmetic library for Erlang.
  •  
44.
  • Lindström Claessen, Koen, 1975, et al. (author)
  • Ranking programs using black box testing
  • 2010
  • In: Proceedings - International Conference on Software Engineering. - New York, NY, USA : ACM. - 0270-5257. - 9781605589701 ; , s. 103-110
  • Conference paper (peer-reviewed)abstract
    • We present an unbiased method for measuring the relative quality of different solutions to a programming problem. Our method is based on identifying possible bugs from program behaviour through black-box testing. The main motivation for such a method is its use in experimental evaluation of software development methods. We report on the use of our method in a small-scale experiment of this kind, which was aimed at evaluating the effectiveness of property-based testing vs. unit testing in software development. Copyright 2010 ACM.
  •  
45.
  • Lindström Claessen, Koen, 1975, et al. (author)
  • Testing monadic code with QuickCheck
  • 2002
  • In: SIGPLAN Notices (ACM Special Interest Group on Programming Languages). - : Association for Computing Machinery (ACM). - 0730-8566 .- 0362-1340 .- 1558-1160. ; 37:12, s. 47-59
  • Conference paper (peer-reviewed)abstract
    • QuickCheck is a previously published random testing tool for Haskell programs. In this paper we show how to use it for testing monadic code, and in particular imperative code written using the ST monad. QuickCheck tests a program against a specification: we show that QuickCheck's specification language is sufficiently powerful to represent common forms of specifications: algebraic, model-based (both functional and relational), and pre-/post-conditional. Moreover, all these forms of specification can be used directly for testing. We define a new language of monadic properties, and make a link between program testing and the notion of observational equivalence.
  •  
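
A minimal sketch of the monadic-property style described in the abstract above, using Test.QuickCheck.Monadic to test a small piece of imperative code against a pure model; the stack example is invented, and the paper's treatment of the ST monad and observational equivalence goes well beyond this:

      -- Model-based monadic property: imperative code checked against a pure model.
      import Test.QuickCheck
      import Test.QuickCheck.Monadic
      import Data.IORef

      -- Imperative code under test: push a list of values onto an IORef-based stack.
      pushAll :: IORef [Int] -> [Int] -> IO ()
      pushAll ref = mapM_ (\x -> modifyIORef ref (x :))

      -- Postcondition: the final stack contents match the pure model (reverse).
      prop_pushAllMatchesModel :: Property
      prop_pushAllMatchesModel = monadicIO $ do
        xs  <- pick (listOf (choose (0, 9)))
        got <- run $ do
          ref <- newIORef []
          pushAll ref xs
          readIORef ref
        assert (got == reverse xs)

      main :: IO ()
      main = quickCheck prop_pushAllMatchesModel
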
46.
  • López, P.E.M., et al. (author)
  • Principal Type Specialisation
  • 2002
  • In: Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation, Aizu, 12-14 September 2002. - New York, NY, USA : ACM. ; , s. 94-105
  • Conference paper (peer-reviewed)abstract
    • Type specialisation is an approach to program specialisation that works with both a program and its type to produce specialised versions of each. As it combines many powerful features, it appears to be a good framework for automatic program production, despite the fact that it was designed originally to express optimal specialisation for interpreters written in typed languages. The original specification of type specialisation used a system of rules expressing it as a generalised type system, rather than the usual view of specialisation as generalised evaluation. That system, while powerful, has some weaknesses not widely recognized, the most important being the inability to express principal type specialisations (a principal specialisation is one that is "more general" than any other for a given specialisable term, and from which those can be obtained by a suitable notion of instantiation). This inability is a problem when extending type specialisation to deal with polymorphism or modules. This work presents a different formulation of the system specifying type specialisation, capturing the notion of principal specialisation for a language with static constructions and polyvariance. It is a step forward in the study of type specialisation for polymorphic languages and lazy languages, and also permits modularity of specialisation, and better implementations.
  •  
47.
  • McCaddon, Andrew, et al. (author)
  • Transcobalamin polymorphism and serum holo-transcobalamin in relation to Alzheimer's disease
  • 2004
  • In: Dementia and Geriatric Cognitive Disorders. - : S. Karger AG. - 1420-8008 .- 1421-9824. ; 17:3, s. 215-221
  • Journal article (peer-reviewed)abstract
    • Isoforms of the vitamin B₁₂ carrier protein transcobalamin (TC) might influence its cellular availability and contribute to the association between disrupted single-carbon metabolism and Alzheimer’s disease (AD). We therefore investigated the relationships between the TC 776C>G (Pro259Arg) genetic polymorphism, total serum cobalamin and holo-TC levels, and disease onset in 70 patients with clinically diagnosed AD and 74 healthy elderly controls. TC 776C>G polymorphism was also determined for 94 histopathologically confirmed AD patients and 107 controls. Serum holo-TC levels were significantly higher in TC 776C homozygotes (p = 0.04). Kaplan-Meier survival functions differed between homozygous genotypes (Cox’s F test, F(42, 46) = 2.1; p = 0.008) and between 776C homozygotes and heterozygotes (Cox’s F test, F(46, 108) = 1.7; p = 0.02). Proportionately fewer TC 776C homozygotes appear to develop AD at any given age, but this will require confirmation in a longitudinal study.
  •  
48.
  • Mista, Claudio Agustin, 1991, et al. (author)
  • Branching Processes for QuickCheck Generators
  • 2018
  • In: Haskell 2018 - Proceedings of the 11th ACM SIGPLAN International Symposium on Haskell, co-located with ICFP 2018. - New York, NY, USA : ACM. ; 53:7, s. 1-13
  • Conference paper (peer-reviewed)abstract
    • In QuickCheck (or, more generally, random testing), it is challenging to control the distributions of random data generators, especially when it comes to user-defined algebraic data types (ADTs). In this paper, we adapt results from an area of mathematics known as branching processes, and show how they help to analytically predict (at compile time) the expected number of generated constructors, even in the presence of mutually recursive or composite ADTs. Using our probabilistic formulas, we design heuristics capable of automatically adjusting probabilities in order to synthesize generators whose distributions are aligned with users' demands. We provide a Haskell implementation of our mechanism in a tool called DRaGeN and perform case studies with real-world applications. When generating random values, our synthesized QuickCheck generators show improvements in code coverage when compared with those automatically derived by state-of-the-art tools.
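    • To make the branching-process view concrete, here is a small hand-written sketch for a hypothetical binary-tree ADT (this is not the DRaGeN tool): the weights passed to QuickCheck's frequency fix the mean number of recursive "offspring" per constructor, from which the expected number of generated constructors, and whether generation terminates at all, can be predicted. Tuning such weights automatically is what the paper's heuristics do.

      import Test.QuickCheck

      -- A hypothetical recursive ADT; Node has two recursive fields.
      data Tree = Leaf | Node Tree Tree
        deriving Show

      -- Generator with explicit constructor weights wLeaf and wNode.  It
      -- deliberately ignores QuickCheck's size parameter, so the distribution
      -- is governed by the weights alone.  Viewed as a branching process,
      -- each constructor has mean offspring
      --   m = 2 * wNode / (wLeaf + wNode)
      -- (a Node spawns two recursive calls, a Leaf spawns none); when m < 1
      -- the expected number of generated constructors is roughly 1 / (1 - m),
      -- and for m >= 1 generation may not terminate, which is exactly the kind
      -- of behaviour a compile-time analysis should rule out.
      genTree :: Int -> Int -> Gen Tree
      genTree wLeaf wNode = go
        where
          go = frequency
            [ (wLeaf, pure Leaf)
            , (wNode, Node <$> go <*> go)
            ]

      -- Count constructors, to check the prediction empirically, e.g. via
      --   fmap (map size) (sample' (genTree 3 1))
      size :: Tree -> Int
      size Leaf       = 1
      size (Node l r) = 1 + size l + size r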
  •  
49.
  • Mostowski, Wojciech, 1976-, et al. (author)
  • Modelling of Autosar Libraries for Large Scale Testing
  • 2017
  • In: 2nd Workshop on Models for Formal Analysis of Real Systems (MARS 2017). ; 244, s. 184-199
  • Conference paper (peer-reviewed)abstract
    • We demonstrate a specific method and technology for model-based testing of large software projects with the QuickCheck tool using property-based specifications. Our specifications are very precise, stateful models of the software under test (SUT). In our approach we define (a) formal descriptions of valid function call sequences (the public API), (b) postconditions that check the validity of each call, and (c) call-out specifications that define and validate external system interactions (the SUT calling an external API). The QuickCheck tool automatically generates and executes tests from these specifications. Commercially, this method and tool have been used to test large parts of industrially developed automotive libraries based on the Autosar standard. In this paper, we exemplify our approach with a circular buffer specified by Autosar, to demonstrate the capabilities of QuickCheck's model-based testing method. Our example is small compared to the commercial QuickCheck models, but faithfully addresses many of the same challenges. © W. Mostowski, T. Arts, J. Hughes.
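    • The models in the paper are written for the commercial (Erlang-based) QuickCheck and drive real C code through its public API. As a language-neutral illustration of the same model-versus-implementation idea on the paper's circular-buffer example, here is a deliberately simplified, pure Haskell sketch (all names hypothetical) that checks an index-based ring buffer against a plain list model over random command sequences.

      import Test.QuickCheck

      -- Hypothetical implementation: fixed-size storage with read/write indices.
      data Ring = Ring { store :: [Maybe Int], rd, wr, cnt :: Int } deriving Show

      newRing :: Int -> Ring
      newRing n = Ring (replicate n Nothing) 0 0 0

      push :: Int -> Ring -> Ring
      push x r
        | cnt r == capacity = r                    -- full: the write is dropped
        | otherwise = r { store = setAt (wr r) (Just x) (store r)
                        , wr    = (wr r + 1) `mod` capacity
                        , cnt   = cnt r + 1 }
        where
          capacity     = length (store r)
          setAt i v xs = take i xs ++ [v] ++ drop (i + 1) xs

      pop :: Ring -> (Maybe Int, Ring)
      pop r
        | cnt r == 0 = (Nothing, r)
        | otherwise  = ( store r !! rd r
                       , r { rd = (rd r + 1) `mod` length (store r), cnt = cnt r - 1 } )

      -- Random command sequences issued against the buffer.
      data Cmd = Push Int | Pop deriving Show

      instance Arbitrary Cmd where
        arbitrary = oneof [Push <$> arbitrary, pure Pop]

      -- Abstract model: a plain list capped at the same capacity.
      runModel :: Int -> [Cmd] -> [Maybe Int]
      runModel capacity = go []
        where
          go _  []            = []
          go xs (Push x : cs) = go (if length xs < capacity then xs ++ [x] else xs) cs
          go xs (Pop    : cs) = case xs of
                                  []       -> Nothing : go [] cs
                                  (y : ys) -> Just y  : go ys cs

      runImpl :: Int -> [Cmd] -> [Maybe Int]
      runImpl capacity = go (newRing capacity)
        where
          go _ []            = []
          go r (Push x : cs) = go (push x r) cs
          go r (Pop    : cs) = let (m, r') = pop r in m : go r' cs

      -- Postcondition: implementation and model agree on every Pop result.
      prop_ringMatchesModel :: Positive Int -> [Cmd] -> Bool
      prop_ringMatchesModel (Positive capacity) cmds =
        runImpl capacity cmds == runModel capacity cmds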
  •  
50.
  • Palka, Michal, 1983, et al. (author)
  • Testing an Optimising Compiler by Generating Random Lambda Terms
  • 2011
  • In: International Workshop on Automation of Software Test.
  • Conference paper (peer-reviewed)abstract
    • This paper considers random testing of a compiler, using randomly generated programs as inputs and comparing their behaviour with and without optimisation. Since the generated programs must compile, we need to take into account syntax, scope rules, and type checking during random generation. Doing so while attaining a good distribution of test data proves surprisingly subtle; the main contribution of this paper is a workable solution to this problem. We used it to generate typed functions on lists, which we compiled using the Glasgow Haskell Compiler, a mature, production-quality Haskell compiler. After around 20,000 tests we triggered an optimiser failure, and automatically simplified it to a program with just a few constructs.
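    • A heavily simplified sketch of the differential-testing setup: a toy, single-typed expression language stands in for the paper's well-typed lambda terms (generating those, with scoping and typing constraints, is the hard part the paper solves), and a constant-folding pass stands in for GHC's optimiser. All names are illustrative.

      import Test.QuickCheck

      -- A tiny expression language standing in for generated Haskell programs.
      data Expr = Lit Int | Add Expr Expr | Mul Expr Expr
        deriving Show

      -- Type-correct generation is trivial here because everything is an Int;
      -- the sized recursion only keeps terms finite.
      instance Arbitrary Expr where
        arbitrary = sized gen
          where
            gen 0 = Lit <$> arbitrary
            gen n = oneof
              [ Lit <$> arbitrary
              , Add <$> gen (n `div` 2) <*> gen (n `div` 2)
              , Mul <$> gen (n `div` 2) <*> gen (n `div` 2)
              ]

      -- Reference semantics.
      eval :: Expr -> Int
      eval (Lit n)   = n
      eval (Add a b) = eval a + eval b
      eval (Mul a b) = eval a * eval b

      -- An "optimising" pass (constant folding), playing the role of the
      -- optimiser whose output is compared against the unoptimised run.
      optimise :: Expr -> Expr
      optimise (Add a b) = case (optimise a, optimise b) of
        (Lit x, Lit y) -> Lit (x + y)
        (a', b')       -> Add a' b'
      optimise (Mul a b) = case (optimise a, optimise b) of
        (Lit x, Lit y) -> Lit (x * y)
        (a', b')       -> Mul a' b'
      optimise e = e

      prop_optimisePreservesSemantics :: Expr -> Bool
      prop_optimisePreservesSemantics e = eval (optimise e) == eval e

      main :: IO ()
      main = quickCheck prop_optimisePreservesSemantics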
  •  
Type of publication
conference paper (39)
journal article (17)
reports (2)
research review (1)
book chapter (1)
Type of content
peer-reviewed (48)
other academic/artistic (12)
Author/Editor
Hughes, John, 1958 (55)
Arts, Thomas, 1969 (13)
Lindström Claessen, ... (13)
Smallbone, Nicholas, ... (9)
Norell, Ulf, 1979 (7)
Russo, Alejandro, 19 ... (6)
Svensson, Hans, 1979 (6)
Palka, Michal, 1983 (5)
Pierce, Benjamin C. (4)
Larsson-Edefors, Per ... (3)
Gerdes, Alex, 1978 (3)
Stenström, Per, 1957 (3)
Svensson, Lars, 1960 (3)
Jeppson, Kjell, 1947 (3)
Sheeran, Mary, 1959 (3)
Lampropoulos, L. (3)
Blennow, Kaj, 1958 (2)
Wang, M. (2)
Witte, Torsten (2)
Nordmark, Gunnel (2)
Wahren-Herlenius, Ma ... (2)
Lessard, Christopher ... (2)
Kelly, Jennifer A. (2)
Kaufman, Kenneth M. (2)
Guthridge, Joel M. (2)
James, Judith A. (2)
Harley, John B. (2)
Gaffney, Patrick M. (2)
Wallace, Daniel J. (2)
Eriksson, Per, 1958- (2)
Jonsson, Roland (2)
Björk, Magnus, 1977 (2)
Själander, Magnus, 1 ... (2)
Mariette, Xavier (2)
Stein, Dan J (2)
Ng, Wan-Fai (2)
Thuresson, Martin, 1 ... (2)
Karlsson, Jonas, 197 ... (2)
Rasmussen, Astrid (2)
Rischmueller, Mauree ... (2)
Brennan, Michael T. (2)
Byrski, Aleksander (2)
Hammond, Kevin (2)
Brugha, Traolach S (2)
Sagar, Rajesh (2)
Radfar, Lida (2)
Stone, Donald U. (2)
Weisman, Michael H. (2)
Huang, Andrew J. W. (2)
Miceli-Richard, Cori ... (2)
University
Chalmers University of Technology (56)
University of Gothenburg (3)
Karolinska Institutet (3)
Uppsala University (2)
Linköping University (2)
Umeå University (1)
Halmstad University (1)
Mälardalen University (1)
Lund University (1)
Södertörn University (1)
Högskolan Dalarna (1)
Language
English (60)
Research subject (UKÄ/SCB)
Natural sciences (53)
Engineering and Technology (11)
Medical and Health Sciences (5)
Social Sciences (1)
