SwePub
Search the SwePub database



Search: WFRF:(Willighagen Egon)

  • Result 1-37 of 37
1.
  •  
2.
  • Bradley, Jean-Claude, et al. (author)
  • Beautifying Data in the Real World
  • 2009. - 1
  • In: Beautiful Data. - Sebastopol, USA : O'Reilly. - 9780596157111 ; , s. 259-278
  • Book chapter (pop. science, debate, etc.)
  •  
3.
  •  
4.
  • Grafström, Roland C, et al. (author)
  • Toward the Replacement of Animal Experiments through the Bioinformatics-driven Analysis of 'Omics' Data from Human Cell Cultures
  • 2015
  • In: ATLA (Alternatives to Laboratory Animals). - : SAGE Publications. - 0261-1929 .- 2632-3559. ; 43:5, s. 325-332
  • Journal article (peer-reviewed)abstract
    • This paper outlines the work for which Roland Grafström and Pekka Kohonen were awarded the 2014 Lush Science Prize. The research activities of the Grafström laboratory have, for many years, covered cancer biology studies, as well as the development and application of toxicity-predictive in vitro models to determine chemical safety. Through the integration of in silico analyses of diverse types of genomics data (transcriptomic and proteomic), their efforts have proved to fit well into the recently-developed Adverse Outcome Pathway paradigm. Genomics analysis within state-of-the-art cancer biology research and Toxicology in the 21st Century concepts share many technological tools. A key category within the Three Rs paradigm is the Replacement of animals in toxicity testing with alternative methods, such as bioinformatics-driven analyses of data obtained from human cell cultures exposed to diverse toxicants. This work was recently expanded within the pan-European SEURAT-1 project (Safety Evaluation Ultimately Replacing Animal Testing), to replace repeat-dose toxicity testing with data-rich analyses of sophisticated cell culture models. The aims and objectives of the SEURAT project have been to guide the application, analysis, interpretation and storage of 'omics' technology-derived data within the service-oriented sub-project, ToxBank. Particularly addressing the Lush Science Prize focus on the relevance of toxicity pathways, a 'data warehouse' that is under continuous expansion, coupled with the development of novel data storage and management methods for toxicology, serve to address data integration across multiple 'omics' technologies. The prize winners' guiding principles and concepts for modern knowledge management of toxicological data are summarised. The translation of basic discovery results ranged from chemical-testing and material-testing data, to information relevant to human health and environmental safety.
  •  
5.
  • Guha, Rajarshi, et al. (author)
  • Collaborative Cheminformatics Applications
  • 2011
  • In: Collaborative Computational Technologies for Biomedical Research. - Hoboken, N.J. : John Wiley & Sons. - 9780470638033
  • Book chapter (other academic/artistic)
  •  
6.
  • Hastings, Janna, et al. (author)
  • The Chemical Information Ontology : provenance and disambiguation for chemical data on the biological Semantic Web
  • 2011
  • In: PLOS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 6:10, s. e25513-
  • Journal article (peer-reviewed)abstract
    • Cheminformatics is the application of informatics techniques to solve chemical problems in silico. There are many areas in biology where cheminformatics plays an important role in computational research, including metabolism, proteomics, and systems biology. One critical aspect in the application of cheminformatics in these fields is the accurate exchange of data, which is increasingly accomplished through the use of ontologies. Ontologies are formal representations of objects and their properties using a logic-based ontology language. Many such ontologies are currently being developed to represent objects across all the domains of science. Ontologies enable the definition, classification, and support for querying objects in a particular domain, enabling intelligent computer applications to be built which support the work of scientists both within the domain of interest and across interrelated neighbouring domains. Modern chemical research relies on computational techniques to filter and organise data to maximise research productivity. The objects which are manipulated in these algorithms and procedures, as well as the algorithms and procedures themselves, enjoy a kind of virtual life within computers. We will call these information entities. Here, we describe our work in developing an ontology of chemical information entities, with a primary focus on data-driven research and the integration of calculated properties (descriptors) of chemical entities within a semantic web context. Our ontology distinguishes algorithmic, or procedural information from declarative, or factual information, and renders of particular importance the annotation of provenance to calculated data. The Chemical Information Ontology is being developed as an open collaborative project.
  •  
7.
  • Kuhn, Thomas, et al. (author)
  • CDK-Taverna : an open workflow environment for cheminformatics
  • 2010
  • In: BMC Bioinformatics. - : Springer Science and Business Media LLC. - 1471-2105. ; 11, s. 159-
  • Journal article (peer-reviewed)abstract
    • Background: Small molecules are of increasing interest for bioinformatics in areas such as metabolomics and drug discovery. The recent release of large open access chemistry databases generates a demand for flexible tools to process them and discover new knowledge. To freely support open science based on these data resources, it is desirable for the processing tools to be open-source and available for everyone. Results: Here we describe a novel combination of the workflow engine Taverna and the cheminformatics library Chemistry Development Kit (CDK), resulting in an open source workflow solution for cheminformatics. We have implemented more than 160 different workers to handle specific cheminformatics tasks. We describe the applications of CDK-Taverna in various usage scenarios. Conclusions: The combination of the workflow engine Taverna and the Chemistry Development Kit provides the first open source cheminformatics workflow solution for the biosciences. With the Taverna community working towards a more powerful workflow engine and a more user-friendly user interface, CDK-Taverna has the potential to become a free alternative to existing proprietary workflow tools.
  •  
8.
  • Kyle, Jennifer E., et al. (author)
  • Interpreting the lipidome : bioinformatic approaches to embrace the complexity
  • 2021
  • In: Metabolomics. - : Springer-Verlag New York. - 1573-3882 .- 1573-3890. ; 17:6
  • Research review (peer-reviewed)abstract
    • BACKGROUND: Improvements in mass spectrometry (MS) technologies coupled with bioinformatics developments have allowed considerable advancement in the measurement and interpretation of lipidomics data in recent years. Since research areas employing lipidomics are rapidly increasing, there is a great need for bioinformatic tools that capture and utilize the complexity of the data. Currently, the diversity and complexity within the lipidome is often concealed by summing over or averaging individual lipids up to (sub)class-based descriptors, losing valuable information about biological function and interactions with other distinct lipid molecules, proteins and/or metabolites. AIM OF REVIEW: To address this gap in knowledge, novel bioinformatics methods are needed to improve identification, quantification, integration and interpretation of lipidomics data. The purpose of this mini-review is to summarize exemplary methods to explore the complexity of the lipidome. KEY SCIENTIFIC CONCEPTS OF REVIEW: Here we describe six approaches that capture three core focus areas for lipidomics: (1) lipidome annotation including a resolvable database identifier, (2) interpretation via pathway- and enrichment-based methods, and (3) understanding complex interactions, to emphasize specific steps in the analytical process and highlight challenges in analyses associated with the complexity of lipidome data.
  •  
9.
  • Lampa, Samuel, et al. (author)
  • RDFIO : extending Semantic MediaWiki for interoperable biomedical data management
  • 2017
  • In: Journal of Biomedical Semantics. - : Springer Science and Business Media LLC. - 2041-1480. ; 8
  • Journal article (peer-reviewed)abstract
    • BACKGROUND: Biological sciences are characterised not only by an increasing amount of data but also by its extreme complexity. This stresses the need for efficient ways of integrating these data in a coherent description of biological systems. In many cases, biological data needs organization before integration. This is often a collaborative effort, and it is thus important that tools for data integration support a collaborative way of working. Wiki systems with support for structured semantic data authoring, such as Semantic MediaWiki, provide a powerful solution for collaborative editing of data combined with machine-readability, so that data can be handled in an automated fashion in any downstream analyses. Semantic MediaWiki lacks a built-in data import function, though, which hinders efficient round-tripping of data between interoperable Semantic Web formats such as RDF and the internal wiki format. RESULTS: To solve this deficiency, the RDFIO suite of tools is presented, which supports importing of RDF data into Semantic MediaWiki, with metadata needed to export it again in the same RDF format, or ontology. Additionally, the new functionality enables mash-ups of automated data imports combined with manually created data presentations. The application of the suite of tools is demonstrated by importing drug discovery related data about rare diseases from Orphanet and acid dissociation constants from Wikidata. The RDFIO suite of tools is freely available for download via pharmb.io/project/rdfio. CONCLUSIONS: Through a set of biomedical demonstrators, it is demonstrated how the new functionality enables a number of usage scenarios where the interoperability of SMW and the wider Semantic Web is leveraged for biomedical data sets, to create an easy to use and flexible platform for exploring and working with biomedical data. (See the code sketch after this entry.)
  •  
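
A minimal sketch of the RDF round-tripping described in the entry above, using Apache Jena as a generic library choice (Jena is an assumption here, not part of RDFIO; the file name data.ttl is a placeholder). It loads a Turtle file, walks its triples, and serializes the model again:

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Statement;
    import org.apache.jena.rdf.model.StmtIterator;

    public class RdfRoundTrip {
        public static void main(String[] args) {
            // Load RDF (Turtle) into an in-memory model.
            Model model = ModelFactory.createDefaultModel();
            model.read("data.ttl", "TURTLE");

            // Walk all triples, e.g. to map them onto wiki pages and properties.
            StmtIterator it = model.listStatements();
            while (it.hasNext()) {
                Statement st = it.next();
                System.out.println(st.getSubject() + " " + st.getPredicate() + " " + st.getObject());
            }

            // Serialize the model again, closing the import/export round trip.
            model.write(System.out, "TURTLE");
        }
    }
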
10.
  • Leist, Marcel, et al. (author)
  • Adverse outcome pathways : opportunities, limitations and open questions
  • 2017
  • In: Archives of Toxicology. - : Springer Science and Business Media LLC. - 0340-5761 .- 1432-0738. ; 91:11, s. 3477-3505
  • Journal article (peer-reviewed)abstract
    • Adverse outcome pathways (AOPs) are a recent toxicological construct that connects, in a formalized, transparent and quality-controlled way, mechanistic information to apical endpoints for regulatory purposes. An AOP links a molecular initiating event (MIE) to the adverse outcome (AO) via key events (KE), in a way specified by key event relationships (KER). Although this approach to formalize mechanistic toxicological information only started in 2010, over 200 AOPs have already been established. At this stage, new requirements arise, such as the need for harmonization and re-assessment, for continuous updating, as well as for alerting about pitfalls, misuses and limits of applicability. In this review, the history of the AOP concept and its most prominent strengths are discussed, including the advantages of a formalized approach, the systematic collection of weight of evidence, the linkage of mechanisms to apical endpoints, the examination of the plausibility of epidemiological data, the identification of critical knowledge gaps and the design of mechanistic test methods. To prepare the ground for a broadened and appropriate use of AOPs, some widespread misconceptions are explained. Moreover, potential weaknesses and shortcomings of the current AOP rule set are addressed (1) to facilitate the discussion on its further evolution and (2) to better define appropriate vs. less suitable application areas. Exemplary toxicological studies are presented to discuss the linearity assumptions of AOP, the management of event modifiers and compensatory mechanisms, and whether a separation of toxicodynamics from toxicokinetics including metabolism is possible in the framework of pathway plasticity. Suggestions on how to compromise between different needs of AOP stakeholders have been added. A clear definition of open questions and limitations is provided to encourage further progress in the field.
  •  
11.
  • Martens, Marvin, et al. (author)
  • ELIXIR and Toxicology : a community in development
  • 2021
  • In: F1000 Research. - : F1000 Research Ltd. - 2046-1402. ; 10, s. 1129-1129
  • Journal article (peer-reviewed)abstract
    • Toxicology has been an active research field for many decades, with academic, industrial and government involvement. Modern omics and computational approaches are changing the field, from merely disease-specific observational models into target-specific predictive models. Traditionally, toxicology has strong links with other fields such as biology, chemistry, pharmacology and medicine. With the rise of synthetic and new engineered materials, alongside ongoing prioritisation needs in chemical risk assessment for existing chemicals, early predictive evaluations are becoming of utmost importance to both scientific and regulatory purposes. ELIXIR is an intergovernmental organisation that brings together life science resources from across Europe. To coordinate the linkage of various life science efforts around modern predictive toxicology, the establishment of a new ELIXIR Community is seen as instrumental. In the past few years, joint efforts, building on incidental overlap, have been piloted in the context of ELIXIR. For example, the EU-ToxRisk, diXa, HeCaToS, transQST, and the nanotoxicology community have worked with the ELIXIR TeSS, Bioschemas, and Compute Platforms and activities. In 2018, a core group of interested parties wrote a proposal, outlining a sketch of what this new ELIXIR Toxicology Community would look like. A recent workshop (held September 30th to October 1st, 2020) extended this into an ELIXIR Toxicology roadmap and a shortlist of limited investment-high gain collaborations to give body to this new community. This Whitepaper outlines the results of these efforts and defines our vision of the ELIXIR Toxicology Community and how it complements other ELIXIR activities.  
  •  
12.
  • Mohammed Taha, Hiba, et al. (author)
  • The NORMAN Suspect List Exchange (NORMAN-SLE) : facilitating European and worldwide collaboration on suspect screening in high resolution mass spectrometry
  • 2022
  • In: Environmental Sciences Europe. - : Springer. - 2190-4707 .- 2190-4715. ; 34:1
  • Journal article (peer-reviewed)abstract
    • Background: The NORMAN Association (https://www.norman-network.com/) initiated the NORMAN Suspect List Exchange (NORMAN-SLE; https://www.norman-network.com/nds/SLE/) in 2015, following the NORMAN collaborative trial on non-target screening of environmental water samples by mass spectrometry. Since then, this exchange of information on chemicals that are expected to occur in the environment, along with the accompanying expert knowledge and references, has become a valuable knowledge base for “suspect screening” lists. The NORMAN-SLE now serves as a FAIR (Findable, Accessible, Interoperable, Reusable) chemical information resource worldwide.Results: The NORMAN-SLE contains 99 separate suspect list collections (as of May 2022) from over 70 contributors around the world, totalling over 100,000 unique substances. The substance classes include per- and polyfluoroalkyl substances (PFAS), pharmaceuticals, pesticides, natural toxins, high production volume substances covered under the European REACH regulation (EC: 1272/2008), priority contaminants of emerging concern (CECs) and regulatory lists from NORMAN partners. Several lists focus on transformation products (TPs) and complex features detected in the environment with various levels of provenance and structural information. Each list is available for separate download. The merged, curated collection is also available as the NORMAN Substance Database (NORMAN SusDat). Both the NORMAN-SLE and NORMAN SusDat are integrated within the NORMAN Database System (NDS). The individual NORMAN-SLE lists receive digital object identifiers (DOIs) and traceable versioning via a Zenodo community (https://zenodo.org/communities/norman-sle), with a total of > 40,000 unique views, > 50,000 unique downloads and 40 citations (May 2022). NORMAN-SLE content is progressively integrated into large open chemical databases such as PubChem (https://pubchem.ncbi.nlm.nih.gov/) and the US EPA’s CompTox Chemicals Dashboard (https://comptox.epa.gov/dashboard/), enabling further access to these lists, along with the additional functionality and calculated properties these resources offer. PubChem has also integrated significant annotation content from the NORMAN-SLE, including a classification browser (https://pubchem.ncbi.nlm.nih.gov/classification/#hid=101).Conclusions: The NORMAN-SLE offers a specialized service for hosting suspect screening lists of relevance for the environmental community in an open, FAIR manner that allows integration with other major chemical resources. These efforts foster the exchange of information between scientists and regulators, supporting the paradigm shift to the “one substance, one assessment” approach. New submissions are welcome via the contacts provided on the NORMAN-SLE website (https://www.norman-network.com/nds/SLE/).
  •  
13.
  • O'Boyle, Noel, et al. (author)
  • Open Data, Open Source and Open Standards in chemistry : The Blue Obelisk five years on
  • 2011
  • In: Journal of Cheminformatics. - : BioMed Central. - 1758-2946. ; 3, s. 37-
  • Journal article (peer-reviewed)abstract
    • Background: The Blue Obelisk movement was established in 2005 as a response to the lack of Open Data, Open Standards and Open Source (ODOSOS) in chemistry. It aims to make it easier to carry out chemistry research by promoting interoperability between chemistry software, encouraging cooperation between Open Source developers, and developing community resources and Open Standards. Results: This contribution looks back on the work carried out by the Blue Obelisk in the past 5 years and surveys progress and remaining challenges in the areas of Open Data, Open Standards, and Open Source in chemistry. Conclusions: We show that the Blue Obelisk has been very successful in bringing together researchers and developers with common interests in ODOSOS, leading to development of many useful resources freely available to the chemistry community.
  •  
14.
  • Samwald, Matthias, et al. (author)
  • Linked open drug data for pharmaceutical research and development
  • 2011
  • In: Journal of Cheminformatics. - : Springer Science and Business Media LLC. - 1758-2946. ; 3, s. 19-
  • Journal article (peer-reviewed)abstract
    • There is an abundance of information about drugs available on the Web. Data sources range from medicinal chemistry results, over the impact of drugs on gene expression, to the outcomes of drugs in clinical trials. These data are typically not connected together, which reduces the ease with which insights can be gained. Linking Open Drug Data (LODD) is a task force within the World Wide Web Consortium's (W3C) Health Care and Life Sciences Interest Group (HCLS IG). LODD has surveyed publicly available data about drugs, created Linked Data representations of the data sets, and identified interesting scientific and business questions that can be answered once the data sets are connected. The task force provides recommendations for the best practices of exposing data in a Linked Data representation. In this paper, we present past and ongoing work of LODD and discuss the growing importance of Linked Data as a foundation for pharmaceutical R&D data sharing.
  •  
15.
  • Spjuth, Ola, 1977-, et al. (author)
  • A novel infrastructure for chemical safety predictions with focus on human health
  • 2012
  • In: Toxicology Letters. - : Elsevier BV. - 0378-4274 .- 1879-3169. ; 211:Supplm, s. S59-
  • Journal article (peer-reviewed)abstract
    • A major objective of Computational Toxicology is to provide reliable and useful estimates in silico of (potentially) harmful actions of chemicals in humans. Predictive models are commonly based on in vitro and in vivo data, and aim at supporting risk assessment in various areas, including the environmental protection, food, and pharmaceutical sectors. The field is however hampered by the lack of standards, access to high quality data, validated predictive models, as well as means to connect toxicity data to genomics data. We present a framework and roadmap for a novel public infrastructure for predictive computational toxicology and chemical safety assessment, consisting of: (1) a repository capable of aggregating high quality toxicity data with gene expression data, (2) a repository where scientists can share and download predictive models for chemical safety, and (3) a user-friendly platform which makes the services and resources accessible for the scientific community. Databases under the framework will adhere to open standards and use standardized open exchange formats in order to interoperate with emerging international initiatives, such as the FP7-funded OpenTox and ToxBank projects. The infrastructure will strengthen and facilitate already ongoing activities within in silico toxicology, open up new possibilities for incorporating genomics data in chemical safety modeling (toxicogenomics), as well as deepen the exploitation of signal transduction networks. The initiative will lay the foundation needed to boost decision support in risk assessment in a wide range of fields, including drug discovery, food safety, as well as agricultural and ecological safety assessment.
  •  
16.
  • Spjuth, Ola, 1977-, et al. (author)
  • Applications of the InChI in cheminformatics with the CDK and Bioclipse
  • 2013
  • In: Journal of Cheminformatics. - : Springer Science and Business Media LLC. - 1758-2946. ; 5:14
  • Journal article (peer-reviewed)abstract
    • Background: The InChI algorithms are written in C++ and are not available as a Java library. Integration into software written in Java therefore requires a bridge between C and Java libraries, provided by the Java Native Interface (JNI) technology. Results: We here describe how the InChI library is used in the Bioclipse workbench and the Chemistry Development Kit (CDK) cheminformatics library. To make this possible, a JNI bridge to the InChI library was developed, JNI-InChI, allowing Java software to access the InChI algorithms. By using this bridge, the CDK project packages the InChI binaries in a module and offers easy access from Java using the CDK API. The Bioclipse project packages and offers InChI as a dynamic OSGi bundle that can easily be used by any OSGi-compliant software, in addition to the regular Java Archive and Maven bundles. Bioclipse itself uses the InChI as a key component and calculates it on the fly when visualizing and editing chemical structures. We demonstrate the utility of InChI with various applications in CDK and Bioclipse, such as decision support for chemical liability assessment, tautomer generation, and for knowledge aggregation using a linked data approach. Conclusions: These results show that the InChI library can be used in a variety of Java library dependency solutions, making the functionality easily accessible by Java software, such as the CDK. The applications show various ways the InChI has been used in Bioclipse, to enrich its functionality. (See the code sketch after this entry.)
  •  
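
A minimal sketch of generating an InChI through the CDK API described in the entry above (assuming a recent CDK release on the classpath; caffeine is an arbitrary example molecule, not taken from the paper):

    import org.openscience.cdk.inchi.InChIGenerator;
    import org.openscience.cdk.inchi.InChIGeneratorFactory;
    import org.openscience.cdk.interfaces.IAtomContainer;
    import org.openscience.cdk.silent.SilentChemObjectBuilder;
    import org.openscience.cdk.smiles.SmilesParser;

    public class InChIExample {
        public static void main(String[] args) throws Exception {
            // Parse a structure from SMILES (caffeine).
            SmilesParser parser = new SmilesParser(SilentChemObjectBuilder.getInstance());
            IAtomContainer caffeine = parser.parseSmiles("Cn1cnc2c1c(=O)n(C)c(=O)n2C");

            // The factory hides the JNI bridge to the InChI C library.
            InChIGenerator generator = InChIGeneratorFactory.getInstance().getInChIGenerator(caffeine);
            System.out.println("InChI:    " + generator.getInchi());
            System.out.println("InChIKey: " + generator.getInchiKey());
        }
    }
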
17.
  • Spjuth, Ola, et al. (author)
  • Bioclipse : an open source workbench for chemo- and bioinformatics
  • 2007
  • In: BMC Bioinformatics. - : Springer Science and Business Media LLC. - 1471-2105. ; 8, s. 59-
  • Journal article (peer-reviewed)abstract
    • BACKGROUND: There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. RESULTS: Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. CONCLUSION: Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open for both open source plugins as well as commercial ones. Bioclipse is freely available at http://www.bioclipse.net.
  •  
18.
  • Spjuth, Ola, 1977-, et al. (author)
  • Bioclipse 2: A scriptable integration platform for the life sciences
  • 2009
  • In: BMC Bioinformatics. - : Springer Science and Business Media LLC. - 1471-2105. ; 10, s. 397-
  • Journal article (peer-reviewed)abstract
    • Background: Contemporary biological research integrates neighboring scientific domains to answer complex questions in fields such as systems biology and drug discovery. This calls for tools that are intuitive to use, yet flexible to adapt to new tasks. Results: Bioclipse is a free, open source workbench with advanced features for the life sciences. Version 2.0 constitutes a complete rewrite of Bioclipse, and delivers a stable, scalable integration platform for developers and an intuitive workbench for end users. All functionality is available both from the graphical user interface and from a built-in novel domain-specific language, supporting the scientist in interdisciplinary research and reproducible analyses through advanced visualization of the inputs and the results. New components for Bioclipse 2 include a rewritten editor for chemical structures, a table for multiple molecules that supports gigabyte-sized files, as well as a graphical editor for sequences and alignments. Conclusions: Bioclipse 2 is equipped with advanced tools required to carry out complex analysis in the fields of bio- and cheminformatics. Developed as a Rich Client based on Eclipse, Bioclipse 2 leverages today's powerful desktop computers to provide a responsive user interface, but also takes full advantage of the Web and networked (Web/Cloud) services for more demanding calculations or retrieval of data. That Bioclipse 2 is based on an advanced and widely used service platform ensures wide extensibility, and new algorithms, visualizations as well as scripting commands can easily be added. The intuitive tools for end users and the extensible architecture make Bioclipse 2 ideal for interdisciplinary and integrative research. Bioclipse 2 is released under the Eclipse Public License (EPL), a flexible open source license that allows additional plugins to be of any license. Bioclipse 2 is implemented in Java and supported on all major platforms; source code and binaries are freely available at http://www.bioclipse.net.
  •  
19.
  • Spjuth, Ola, 1977-, et al. (author)
  • Bioclipse-R : Integrating management and visualization of life science data with statistical analysis
  • 2013
  • In: Bioinformatics. - : Oxford University Press. - 1367-4803 .- 1367-4811. ; 29:2, s. 286-289
  • Journal article (peer-reviewed)abstract
    • Bioclipse, a graphical workbench for the life sciences, provides functionality for managing and visualizing life science data. We introduce Bioclipse-R, which integrates Bioclipse and the statistical programming language R. The synergy between Bioclipse and R is demonstrated by the construction of a decision support system for anticancer drug screening and mutagenicity prediction, which shows how Bioclipse-R can be used to perform complex tasks from within a single software system.
  •  
20.
  • Spjuth, Ola, 1977-, et al. (author)
  • Open source drug discovery with Bioclipse
  • 2012
  • In: Current Topics in Medicinal Chemistry. - : Bentham Science Publishers Ltd. - 1568-0266 .- 1873-4294. ; 12:18, s. 1980-1986
  • Research review (peer-reviewed)abstract
    • We present the open source components for drug discovery that have been developed and integrated into the graphical workbench Bioclipse. Building on a solid open source cheminformatics core, Bioclipse has advanced functionality for managing and visualizing chemical structures and related information. The features presented here include QSAR/QSPR modeling, various predictive solutions such as decision support for chemical liability assessment, site-of-metabolism prediction, virtual screening, and knowledge discovery and integration. We demonstrate the utility of the described tools with examples from computational pharmacology, toxicology, and ADME. Bioclipse is used in both academia and industry, and is a good example of open source leading to new solutions for drug discovery.
  •  
21.
  • Spjuth, Ola, 1977-, et al. (author)
  • Towards interoperable and reproducible QSAR analyses : Exchange of data sets
  • 2010
  • In: Journal of Cheminformatics. - : BioMed Central. - 1758-2946. ; 2
  • Journal article (peer-reviewed)abstract
    • BACKGROUND: QSAR/QSPR is a widely used method to relate chemical structures and responses based on experimental observations. In QSAR, chemical structures are expressed as descriptors, which are mathematical representations like calculated properties or enumerated fragments. Many existing QSAR data sets are based on a combination of different software tools mixed with in-house developed solutions, with datasets manually assembled in spreadsheets. Currently there exists no agreed-upon definition of descriptors and no standard for exchanging data sets in QSAR, which together with numerous different descriptor implementations makes it a virtually impossible task to reproduce and validate analyses, and significantly hinders collaborations and re-use of data. RESULTS: We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR/QSPR data sets, comprising an open XML format (QSAR-ML) and an open extensible descriptor ontology (Blue Obelisk Descriptor Ontology). The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a data set described by QSAR-ML makes its setup completely reproducible. We also provide an implementation as a set of plugins for Bioclipse that simplifies QSAR data set formation, and allows for exporting in QSAR-ML as well as traditional CSV formats. The implementation facilitates addition of new descriptor implementations, from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. CONCLUSIONS: Standardized QSAR data sets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible dataset formation, solving the problems of defining which software components were used, their versions, and the case of multiple names for the same descriptor. This makes it easy to join, extend, and combine data sets, and also to work collectively. The presented Bioclipse plugins equip scientists with intuitive tools that make QSAR-ML widely available for the community.
  •  
22.
  • Spjuth, Ola, 1977-, et al. (author)
  • XMetDB : an open access database for xenobiotic metabolism
  • 2016
  • In: Journal of Cheminformatics. - : Springer Science and Business Media LLC. - 1758-2946. ; 8
  • Journal article (peer-reviewed)abstract
    • Xenobiotic metabolism is an active research topic, but the limited amount of openly available high-quality biotransformation data constrains predictive modeling. Current databases often default to commonly available information: which enzyme metabolizes a compound, but neither experimental conditions nor the atoms that undergo metabolization are captured. We present XMetDB, an open access database for drugs and other xenobiotics and their respective metabolites. The database contains chemical structures of xenobiotic biotransformations with substrate atoms annotated as reaction centra, the resulting product formed, and the catalyzing enzyme, type of experiment, and literature references. Associated with the database is a web interface for the submission and retrieval of experimental metabolite data for drugs and other xenobiotics in various formats, and a web API for programmatic access is also available. The database is open for data deposition, and a curation scheme is in place for quality control. An extensive guide on how to enter experimental data into the database is available from the XMetDB wiki. XMetDB formalizes how biotransformation data should be reported, and the openly available, systematically labeled data is a big step forward towards better models for predictive metabolism.
  •  
23.
  • van Rijswijk, Merlijn, et al. (author)
  • The future of metabolomics in ELIXIR.
  • 2017
  • In: F1000 Research. - : F1000 Research Ltd. - 2046-1402. ; 6
  • Journal article (peer-reviewed)abstract
    • Metabolomics, the youngest of the major omics technologies, is supported by an active community of researchers and infrastructure developers across Europe. To coordinate and focus efforts around infrastructure building for metabolomics within Europe, a workshop on the "Future of metabolomics in ELIXIR" was organised at Frankfurt Airport in Germany. This one-day strategic workshop involved representatives of ELIXIR Nodes, members of the PhenoMeNal consortium developing an e-infrastructure that supports workflow-based metabolomics analysis pipelines, and experts from the international metabolomics community. The workshop established metabolite identification as the critical area, where a maximal impact of computational metabolomics and data management on other fields could be achieved. In particular, the existing four ELIXIR Use Cases, where the metabolomics community - both industry and academia - would benefit most, and which could be exhaustively mapped onto the current five ELIXIR Platforms were discussed. This opinion article is a call for support for a new ELIXIR metabolomics Use Case, which aligns with and complements the existing and planned ELIXIR Platforms and Use Cases.
  •  
24.
  • Wagener, Johannes, et al. (author)
  • XMPP for cloud computing in bioinformatics supporting discovery and invocation of asynchronous web services
  • 2009
  • In: BMC Bioinformatics. - : Springer Science and Business Media LLC. - 1471-2105. ; 10, s. 279-
  • Journal article (peer-reviewed)abstract
    • BACKGROUND: Life sciences make heavy use of the web for both data provision and analysis. However, the increasing amount of available data and the diversity of analysis tools call for machine accessible interfaces in order to be effective. HTTP-based Web service technologies, like the Simple Object Access Protocol (SOAP) and REpresentational State Transfer (REST) services, are today the most common technologies for this in bioinformatics. However, these methods have severe drawbacks, including lack of discoverability, and the inability for services to send status notifications. Several complementary workarounds have been proposed, but the results are ad-hoc solutions of varying quality that can be difficult to use. RESULTS: We present a novel approach based on the open standard Extensible Messaging and Presence Protocol (XMPP), consisting of an extension (IO Data) to comprise discovery, asynchronous invocation, and definition of data types in the service. That XMPP cloud services are capable of asynchronous communication implies that clients do not have to poll repetitively for status, but the service sends the results back to the client upon completion. Implementations for Bioclipse and Taverna are presented, as are various XMPP cloud services in bio- and cheminformatics. CONCLUSION: XMPP with its extensions is a powerful protocol for cloud services that demonstrates several advantages over traditional HTTP-based Web services: 1) services are discoverable without the need of an external registry, 2) asynchronous invocation eliminates the need for ad-hoc solutions like polling, and 3) input and output types defined in the service allow for generation of clients on the fly without the need of an external semantics description. The many advantages over existing technologies make XMPP a highly interesting candidate for next generation online services in bioinformatics.
  •  
25.
  •  
26.
  • Williams, Anthony J., et al. (author)
  • Accessing, using, and creating chemical property databases for computational toxicology modeling
  • 2012
  • In: Computational Toxicology. - Totowa, NJ : Springer. ; 929, s. 221-241
  • Book chapter (peer-reviewed)abstract
    • Toxicity data is expensive to generate, is increasingly seen as precompetitive, and is frequently used for the generation of computational models in a discipline known as computational toxicology. Repositories of chemical property data are valuable for supporting computational toxicologists by providing access to data regarding potential toxicity issues with compounds as well as for the purpose of building structure-toxicity relationships and associated prediction models. These relationships use mathematical, statistical, and modeling computational approaches and can be used to understand the mechanisms by which chemicals cause harm and, ultimately, enable prediction of adverse effects of these chemicals to human health and/or the environment. Such approaches are of value as they offer an opportunity to prioritize chemicals for testing. An increasing amount of data used by computational toxicologists is being published into the public domain and, in parallel, there is a greater availability of Open Source software for the generation of computational models. This chapter provides an overview of the types of data and software available and how these may be used to produce predictive toxicology models for the community.
  •  
27.
  • Willighagen, Egon, 1974- (author)
  • 3D Molecular Representations
  • 2010
  • In: Handbook of Chemoinformatics Algorithms. - Boca Raton : CRC Press. - 9781420082920
  • Book chapter (other academic/artistic)
  •  
28.
  • Willighagen, Egon, et al. (author)
  • Computational toxicology using OpenTox & Bioclipse
  • 2012
  • In: Toxicology Letters. - : Elsevier BV. - 0378-4274 .- 1879-3169. ; 211, s. S60-S60
  • Journal article (other academic/artistic)abstract
    • Computational methods, e.g., OECD QSAR Toolbox and ToxPredict, are increasingly used to support chemical toxicity assessment in academic settings, industry and governments. The in silico-based assessments complement experimental approaches and have potential to fill the knowledge gaps needed to broadly assess chemical hazards. We present here the interoperable Bioclipse–OpenTox platform as a novel alternative made freely and openly available. The interactive Bioclipse software is combined with remote computational toxicity prediction provided by services in the OpenTox network from various European institutes and SMEs. These online services apply machine learning methods for integration of a number of end points, e.g., Ames mutagenicity test in salmonella, Caco-2 cell model permeability and micronucleus assay in rodents. Coupling of such data to chemical structure assessments leads to prediction of site(s) for metabolism (using SMARTCyp), biodegradation (START), and toxicity mode prediction (Verhaar scheme). The OpenTox platform thus unifies how the services are accessed, whereas the Bioclipse software provides the easy-to-use interface for interactively studying the toxic part of molecules. Additional predictive methods that are made accessible via the OpenTox network are automatically discovered by Bioclipse, and models can be improved over time without any need to reinstall Bioclipse itself.
  •  
29.
  • Willighagen, Egon, et al. (author)
  • Computational toxicology using the OpenTox application programming interface and Bioclipse
  • 2011
  • In: BMC Research Notes. - : Springer Science and Business Media LLC. - 1756-0500. ; 4:1, s. 487-
  • Journal article (peer-reviewed)abstract
    • Background: Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications. Findings: This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplifying communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources. Conclusions: A novel computational toxicity assessment platform was generated from integration of two open science platforms related to toxicology: Bioclipse, that combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets by the use of the Open Standards from the OpenTox Application Programming Interface. This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, taking advantage of the Bioclipse workbench handling the technical layers.
  •  
30.
  •  
31.
  • Willighagen, Egon L, et al. (author)
  • The ChEMBL database as linked open data
  • 2013
  • In: Journal of Cheminformatics. - : Springer Science and Business Media LLC. - 1758-2946. ; 5:1, s. 23-
  • Journal article (peer-reviewed)abstract
    • Background: Making data available as Linked Data using Resource Description Framework (RDF) promotes integration with other web resources. RDF documents can natively link to related data, and others can link back using Uniform Resource Identifiers (URIs). RDF makes the data machine-readable and uses extensible vocabularies for additional information, making it easier to scale up inference and data analysis. Results: This paper describes recent developments in an ongoing project converting data from the ChEMBL database into RDF triples. Relative to earlier versions, this updated version of ChEMBL-RDF uses recently introduced ontologies, including CHEMINF and CiTO; exposes more information from the database; and is now available as dereferenceable, linked data. To demonstrate these new features, we present novel use cases showing further integration with other web resources, including Bio2RDF, Chem2Bio2RDF, and ChemSpider, and showing the use of standard ontologies for querying. Conclusions: We have illustrated the advantages of using open standards and ontologies to link the ChEMBL database to other databases. Using those links and the knowledge encoded in standards and ontologies, the ChEMBL-RDF resource creates a foundation for integrated semantic web cheminformatics applications, such as the presented decision support. (See the code sketch after this entry.)
  •  
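
A hedged sketch of querying such a linked data set over SPARQL with Apache Jena; the endpoint URL and the class IRI below are placeholders, not the actual ChEMBL-RDF vocabulary, which is documented with the dataset:

    import org.apache.jena.query.Query;
    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.QueryFactory;
    import org.apache.jena.query.QuerySolution;
    import org.apache.jena.query.ResultSet;

    public class ChemblRdfQuery {
        public static void main(String[] args) {
            // Placeholder endpoint; substitute a live ChEMBL SPARQL service.
            String endpoint = "https://example.org/chembl/sparql";

            // Placeholder class IRI; the real vocabulary builds on CHEMINF and related ontologies.
            String sparql =
                "SELECT ?molecule WHERE { ?molecule a <http://example.org/chembl#Molecule> } LIMIT 10";

            Query query = QueryFactory.create(sparql);
            try (QueryExecution exec = QueryExecutionFactory.sparqlService(endpoint, query)) {
                ResultSet results = exec.execSelect();
                while (results.hasNext()) {
                    QuerySolution row = results.next();
                    System.out.println(row.getResource("molecule"));
                }
            }
        }
    }
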
32.
  • Willighagen, Egon L., et al. (author)
  • The Chemistry Development Kit (CDK) v2.0 : atom typing, depiction, molecular formulas, and substructure searching
  • 2017
  • In: Journal of Cheminformatics. - : Springer Science and Business Media LLC. - 1758-2946. ; 9
  • Journal article (peer-reviewed)abstract
    • Background: The Chemistry Development Kit (CDK) is a widely used open source cheminformatics toolkit, providing data structures to represent chemical concepts along with methods to manipulate such structures and perform computations on them. The library implements a wide variety of cheminformatics algorithms ranging from chemical structure canonicalization to molecular descriptor calculations and pharmacophore perception. It is used in drug discovery, metabolomics, and toxicology. Over the last 10 years, however, the code base has grown significantly, resulting in many complex interdependencies among components and poor performance of many algorithms. Results: We report improvements to the CDK v2.0 since the v1.2 release series, specifically addressing the increased functional complexity and poor performance. We first summarize the addition of new functionality, such as atom typing and molecular formula handling, and improvement to existing functionality that has led to significantly better performance for substructure searching, molecular fingerprints, and rendering of molecules. Second, we outline how the CDK has evolved with respect to quality control and the approaches we have adopted to ensure stability, including a code review mechanism. Conclusions: This paper highlights our continued efforts to provide a community driven, open source cheminformatics library, and shows that such collaborative projects can thrive over extended periods of time, resulting in a high-quality and performant library. By taking advantage of community support and contributions, we show that an open source cheminformatics project can act as a peer reviewed publishing platform for scientific computing software. (See the code sketch after this entry.)
  •  
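
A small sketch of the substructure searching mentioned in the entry above, using the CDK SMARTS matcher (package locations vary slightly between CDK versions; the molecule and pattern are arbitrary illustrations, not taken from the paper):

    import org.openscience.cdk.interfaces.IAtomContainer;
    import org.openscience.cdk.silent.SilentChemObjectBuilder;
    import org.openscience.cdk.smarts.SmartsPattern;
    import org.openscience.cdk.smiles.SmilesParser;

    public class SubstructureSearch {
        public static void main(String[] args) throws Exception {
            // Aspirin as the target structure.
            SmilesParser parser = new SmilesParser(SilentChemObjectBuilder.getInstance());
            IAtomContainer aspirin = parser.parseSmiles("CC(=O)Oc1ccccc1C(=O)O");

            // SMARTS pattern for a carboxylic acid group.
            SmartsPattern pattern = SmartsPattern.create("C(=O)[OX2H1]");
            System.out.println("Contains a carboxylic acid: " + pattern.matches(aspirin));
        }
    }
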
33.
  • Willighagen, Egon, 1974-, et al. (author)
  • Linking Open Drug Data to Cheminformatics and Proteochemometrics
  • 2010
  • In: SWAT4LS-2009 - Semantic Web Applications and Tools for Life Sciences. - Aachen, Germany : Sun SITE Central Europe.
  • Conference paper (peer-reviewed)abstract
    • Semantic Web technologies have made great steps forward in data exchange in health care and life sciences in the past years. The work presented here focuses to some extent on making drug discovery related data available as RDF, and even more so on the integration of RDF approaches with data analysis of molecular information in drug discovery fields like cheminformatics and proteochemometrics. We here show how the chem- and bioinformatics workbench Bioclipse and the Chemistry Development Kit can be used to this purpose.
  •  
34.
  • Willighagen, Egon, 1974-, et al. (author)
  • Linking the Resource Description Framework to cheminformatics and proteochemometrics
  • 2011
  • In: Journal of Biomedical Semantics. - 2041-1480. ; 2:Suppl 1, s. 6-
  • Journal article (peer-reviewed)abstract
    • BACKGROUND: Semantic web technologies are finding their way into the life sciences. Ontologies and semantic markup have already been used for more than a decade in molecular sciences, but have not found widespread use yet. The semantic web technology Resource Description Framework (RDF) and related methods show themselves to be sufficiently versatile to change that situation. RESULTS: The work presented here focuses on linking RDF approaches to existing molecular chemometrics fields, including cheminformatics, QSAR modeling and proteochemometrics. Applications are presented that link RDF technologies to methods from statistics and cheminformatics, including data aggregation, visualization, chemical identification, and property prediction. They demonstrate how this can be done using various existing RDF standards and cheminformatics libraries. For example, we show how IC50 and Ki values are modeled for a number of biological targets using data from the ChEMBL database. CONCLUSIONS: We have shown that existing RDF standards can suitably be integrated into existing molecular chemometrics methods. Platforms that unite these technologies, like Bioclipse, make this even simpler and more transparent. Being able to create and share workflows that integrate data aggregation and analysis (visual and statistical) is beneficial to interoperability and reproducibility. The current work shows that RDF approaches are sufficiently powerful to support molecular chemometrics workflows.
  •  
35.
  •  
36.
  • Willighagen, Egon, et al. (author)
  • Userscripts for the Life Sciences
  • 2007
  • In: BMC Bioinformatics. - : Springer Science and Business Media LLC. - 1471-2105. ; 8, s. 487-
  • Journal article (peer-reviewed)abstract
    • Background: The web has seen an explosion of chemistry and biology related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like Resource Description Framework and Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages, by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources. Results: Several userscripts are presented that enrich biology and chemistry related web resources by incorporating or linking to other computational or data sources on the web. The scripts make use of Greasemonkey-like plugins for web browsers and are written in JavaScript. Information from third-party resources is extracted using open Application Programming Interfaces, while common Uniform Resource Locator schemes are used to make deep links to related information in that external resource. The userscripts presented here use a variety of techniques and resources, and show the potential of such scripts. Conclusion: This paper discusses a number of userscripts that aggregate information from two or more web resources. Examples are shown that enrich web pages with information from other resources, and show how information from web pages can be used to link to, search, and process information in other resources. Due to the nature of userscripts, scientists are able to select those scripts they find useful on a daily basis, as the scripts run directly in their own web browser rather than on the web server. This flexibility allows the scientists to tune the features of web resources to optimise their productivity.
  •  
37.
  • Wohlgemuth, Gert, et al. (author)
  • The Chemical Translation Service (CTS) : a web-based tool to improve standardization of metabolomic reports
  • 2010
  • In: Bioinformatics. - : Oxford University Press (OUP). - 1367-4803 .- 1367-4811. ; 26:20, s. 2647-2648
  • Journal article (peer-reviewed)abstract
    • Metabolomic publications and databases use different database identifiers or even trivial names which disable queries across databases or between studies. The best way to annotate metabolites is by chemical structures, encoded by the International Chemical Identifier code (InChI) or InChIKey. We have implemented a web-based Chemical Translation Service that performs batch conversions of the most common compound identifiers, including CAS, ChEBI, compound formulas, Human Metabolome Database HMDB, InChI, InChIKey, IUPAC name, KEGG, LipidMaps, PubChem CID+SID, SMILES, and chemical synonym names. Batch conversion downloads of 1,410 CIDs are performed in 2.5 minutes. Structures are automatically displayed. (See the code sketch after this entry.)
  •  
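
A hedged sketch of calling a batch-conversion REST service like the CTS from Java 11's HttpClient; the URL pattern and identifier-type names are illustrative assumptions, so check the current CTS documentation for the exact endpoint:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CtsLookup {
        public static void main(String[] args) throws Exception {
            // Illustrative URL pattern only: convert an InChIKey (caffeine) to a PubChem CID.
            String url = "https://cts.fiehnlab.ucdavis.edu/rest/convert/InChIKey/PubChem%20CID/"
                       + "RYYVLZVUVIJVGH-UHFFFAOYSA-N";

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            // The service returns JSON; here we simply print the raw response body.
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }
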
Type of publication
journal article (26)
book chapter (4)
book (2)
conference paper (2)
research review (2)
doctoral thesis (1)
Type of content
peer-reviewed (29)
other academic/artistic (6)
pop. science, debate, etc. (2)
Author/Editor
Willighagen, Egon (24)
Spjuth, Ola, 1977- (16)
Steinbeck, Christoph (8)
Willighagen, Egon L. (8)
Alvarsson, Jonathan (6)
Wikberg, Jarl (6)
Guha, Rajarshi (6)
Eklund, Martin (5)
Spjuth, Ola (5)
Grafström, Roland (5)
Jeliazkova, Nina (5)
Willighagen, Egon, 1 ... (5)
Berg, Arvid (4)
Carlsson, Lars (3)
Salek, Reza M (3)
Kuhn, Stefan (3)
Hardy, Barry (3)
Williams, Antony J. (3)
Evelo, Chris T. (3)
Wagener, Johannes (3)
Georgiev, Valentin (2)
Neumann, Steffen (2)
Wikberg, Jarl E. S. (2)
Lampa, Samuel (2)
Slobodnik, Jaroslav (2)
Ceder, Rebecca (2)
Rocca-Serra, Philipp ... (2)
Schymanski, Emma L. (2)
Bradley, Jean-Claude (2)
Lang, Andrew (2)
Adams, Samuel (2)
Lapins, Maris (2)
Sansone, Susanna-Ass ... (2)
Schulze, Tobias (2)
Aalizadeh, Reza (2)
Eklund, Martin, 1978 ... (2)
Zdrazil, Barbara (2)
Nymark, Penny (2)
Kohonen, Pekka (2)
Sanz, Ferran (2)
Hastings, Janna (2)
Weber, Ralf J. M. (2)
Jourdan, Fabien (2)
Martens, Marvin (2)
Oberacher, Herbert (2)
Bolton, Evan E. (2)
Ramirez, Noelia (2)
O'Boyle, Noel (2)
Murray-Rust, Peter (2)
Torrance, Gilleain (2)
University
Uppsala University (32)
Karolinska Institutet (8)
Royal Institute of Technology (2)
Stockholm University (2)
Umeå University (1)
Örebro University (1)
Linköping University (1)
Swedish University of Agricultural Sciences (1)
Language
English (37)
Research subject (UKÄ/SCB)
Natural sciences (29)
Medical and Health Sciences (11)
Engineering and Technology (2)
