SwePub
Search the SwePub database

  Extended search

Result list for the search "WFRF:(Jeliazkova Nina)"

Search: WFRF:(Jeliazkova Nina)

  • Result 1-13 of 13
1.
  • Bhhatarai, Barun, et al. (author)
  • CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals
  • 2011
  • In: Molecular Informatics. - John Wiley & Sons. - 1868-1751 .- 1868-1743. ; 30:2-3, pp. 189-204
  • Journal article (peer-reviewed). Abstract:
    • Quantitative structure-property relationship (QSPR) studies of the melting point (MP) and boiling point (BP) of per- and polyfluorinated chemicals (PFCs) are presented. The training and prediction chemicals used for developing and validating the models were selected from the Syracuse PhysProp database and the literature. The available experimental data sets were split in two different ways: a) by random selection on the response value, and b) by structural similarity verified by self-organizing map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Models based on individual linear and non-linear approaches, developed by different CADASTER partners using 0D-2D Dragon descriptors, E-state descriptors and fragment-based descriptors, as well as a consensus model and their predictions, are presented. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using the PERFORCE database, containing 15 MP and 25 BP data points, respectively. This database contains only long-chain perfluoroalkylated chemicals, which are particularly monitored by regulatory agencies such as US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets, together with a study of the applicability domain, highlight the robustness and high accuracy of the models. Finally, MPs were predicted for an additional 303 PFCs and BPs for 271 PFCs for which experimental measurements are unavailable.
  •  
2.
  • Brandmaier, Stefan, et al. (author)
  • The QSPR-THESAURUS : The Online Platform of the CADASTER Project
  • 2014
  • In: ATLA (Alternatives to Laboratory Animals). - SAGE Publications. - 0261-1929. ; 42:1, pp. 13-24
  • Journal article (peer-reviewed). Abstract:
    • The aim of the CADASTER project (CAse Studies on the Development and Application of in Silico Techniques for Environmental Hazard and Risk Assessment) was to exemplify REACH-related hazard assessments for four classes of chemical compound, namely polybrominated diphenylethers, per- and polyfluorinated compounds, (benzo)triazoles, and musks and fragrances. The QSPR-THESAURUS website (http://qspr-thesaurus.eu) was established as the project's online platform to upload, store, apply, and also create models within the project. We review the main features of the website, such as model upload, experimental design and hazard assessment to support risk assessment, and integration with other web tools, all of which are essential parts of the QSPR-THESAURUS.
  •  
3.
  • Cassani, Stefano, et al. (author)
  • Evaluation of CADASTER QSAR Models for the Aquatic Toxicity of (Benzo)triazoles and Prioritisation by Consensus Prediction
  • 2013
  • In: ATLA (Alternatives to Laboratory Animals). - SAGE Publications. - 0261-1929. ; 41:1, pp. 49-64
  • Journal article (peer-reviewed). Abstract:
    • QSAR regression models of the toxicity of triazoles and benzotriazoles ((B)TAZs) to an alga (Pseudokirchneriella subcapitata), Daphnia magna and a fish (Oncorhynchus mykiss) were developed by five partners in the FP7-EU project CADASTER. The models were developed by different methods - Ordinary Least Squares (OLS), Partial Least Squares (PLS), Bayesian regularised regression and Associative Neural Network (ASNN) - using various molecular descriptors (DRAGON, PaDEL-Descriptor and the QSPR-THESAURUS web). In addition, different procedures were used for variable selection, validation and applicability-domain inspection. The predictions of the models developed, as well as those obtained in a consensus approach by averaging the predictions from each model (a small numerical sketch of this averaging follows this record), were compared with the results of experimental tests performed by two CADASTER partners. The individual and consensus models were able to correctly predict the toxicity classes of the chemicals tested in the CADASTER project, confirming the utility of the QSAR approach. The models were also used for the prediction of the aquatic toxicity of over 300 (B)TAZs, many of which are included in the REACH pre-registration list and lack experimental data. This highlights the importance of QSAR models for the screening and prioritisation of untested chemicals, in order to reduce and focus experimental testing.
  •  
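The consensus approach described in record 3 averages the predictions that the individual QSAR models give for one chemical. The sketch below only illustrates that arithmetic; the model names follow the methods listed in the abstract, while the numeric values are invented placeholders, not data from the paper.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal illustration of consensus-by-averaging over several QSAR model outputs. */
public class ConsensusExample {
    public static void main(String[] args) {
        // Hypothetical predicted toxicity values (e.g. log-transformed EC50) for one
        // chemical, one value per partner model; the numbers are placeholders.
        Map<String, Double> predictions = new LinkedHashMap<>();
        predictions.put("OLS", 4.2);
        predictions.put("PLS", 4.5);
        predictions.put("Bayesian regularised regression", 4.1);
        predictions.put("ASNN", 4.6);

        // Consensus prediction = arithmetic mean of the individual model predictions.
        double consensus = predictions.values().stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(Double.NaN);

        System.out.printf("Consensus prediction: %.2f%n", consensus); // 4.35 here
    }
}
```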
4.
  • Hardy, Barry, et al. (author)
  • Toxicology Ontology Perspectives
  • 2012
  • In: ALTEX. Alternatives zu Tierexperimenten. - 0946-7785. ; 29:2, pp. 139-156
  • Journal article (peer-reviewed). Abstract:
    • The field of predictive toxicology requires the development of open, public, computable, standardized toxicology vocabularies and ontologies to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. In this article we review ontology developments based on a set of perspectives showing how ontologies are being used in predictive toxicology initiatives and applications. Perspectives on resources and initiatives reviewed include OpenTox, eTOX, Pistoia Alliance, ToxWiz, Virtual Liver, EU-ADR, BEL, ToxML, and Bioclipse. We also review existing ontology developments in neighboring fields that can contribute to establishing an ontological framework for predictive toxicology. A significant set of resources is already available to provide a foundation for an ontological framework for 21st century mechanistic-based toxicology research. Ontologies such as ToxWiz provide a basis for application to toxicology investigations, whereas other ontologies under development in the biological, chemical, and biomedical communities could be incorporated in an extended future framework. OpenTox has provided a semantic web framework for the implementation of such ontologies into software applications and linked data resources. Bioclipse developers have shown the benefit of interoperability obtained through ontology by being able to link their workbench application with remote OpenTox web services. Although these developments are promising, an increased international coordination of efforts is greatly needed to develop a more unified, standardized, and open toxicology ontology framework.
  •  
5.
  • Hardy, Barry, et al. (author)
  • Food for thought... : A toxicology ontology roadmap
  • 2012
  • In: ALTEX. Alternatives zu Tierexperimenten. - 0946-7785. ; 29:2, pp. 129-137
  • Journal article (peer-reviewed). Abstract:
    • Foreign substances can have a dramatic and unpredictable adverse effect on human health. In the development of new therapeutic agents, it is essential that the potential adverse effects of all candidates be identified as early as possible. The field of predictive toxicology strives to profile the potential for adverse effects of novel chemical substances before they occur, both with traditional in vivo experimental approaches and increasingly through the development of in vitro and computational methods which can supplement and reduce the need for animal testing. To be maximally effective, the field needs access to the largest possible knowledge base of previous toxicology findings, and such results need to be made available in a fashion that is interoperable, comparable, and compatible with standard toolkits. This necessitates the development of open, public, computable, and standardized toxicology vocabularies and ontologies to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. Such ontology development will support data management, model building, integrated analysis, validation and reporting, including regulatory reporting and alternative testing submission requirements as required by guidelines such as the REACH legislation, leading to new scientific advances in mechanistically based predictive toxicology. Numerous existing ontology and standards initiatives can contribute to the creation of a toxicology ontology supporting the needs of predictive toxicology and risk assessment, and new ontologies are needed to satisfy practical use cases and scenarios where gaps currently exist. Developing and integrating these resources will require a well-coordinated and sustained effort across numerous stakeholders engaged in a public-private partnership. In this communication, we set out a roadmap for the development of an integrated toxicology ontology, harnessing existing resources where applicable. We describe the stakeholders' requirements analysis from the academic and industry perspectives, timelines, and expected benefits of this initiative, with a view to engagement with the wider community.
  •  
6.
  • Honma, Masamitsu, et al. (author)
  • Improvement of quantitative structure-activity relationship (QSAR) tools for predicting Ames mutagenicity : outcomes of the Ames/QSAR International Challenge Project
  • 2019
  • In: Mutagenesis. - Oxford : Oxford University Press (OUP). - 0267-8357 .- 1464-3804. ; 34:1, pp. 3-16
  • Journal article (peer-reviewed). Abstract:
    • The International Conference on Harmonization (ICH) M7 guideline allows the use of in silico approaches for predicting Ames mutagenicity for the initial assessment of impurities in pharmaceuticals. This is the first international guideline that addresses the use of quantitative structure-activity relationship (QSAR) models in lieu of actual toxicological studies for human health assessment. Therefore, QSAR models for Ames mutagenicity now require higher predictive power for identifying mutagenic chemicals. To increase the predictive power of QSAR models, larger experimental datasets from reliable sources are required. The Division of Genetics and Mutagenesis, National Institute of Health Sciences (DGM/NIHS) of Japan recently established a unique proprietary Ames mutagenicity database containing 12,140 new chemicals that have not previously been used for developing QSAR models. The DGM/NIHS provided this Ames database to QSAR vendors to validate and improve their QSAR tools. The Ames/QSAR International Challenge Project was initiated in 2014, with 12 QSAR vendors testing 17 QSAR tools against these compounds in three phases. We now present the final results. All tools were considerably improved by participation in this project. Most tools achieved >50% sensitivity (positive predictions among all Ames positives) and predictive power (accuracy) as high as 80%, almost equivalent to the inter-laboratory reproducibility of Ames tests (both metrics are defined after this record). To further increase the predictive power of QSAR tools, accumulation of additional Ames test data is required, as well as re-evaluation of some previous Ames test results. Indeed, some Ames-positive or Ames-negative chemicals may previously have been incorrectly classified because of methodological weaknesses, resulting in false-positive or false-negative predictions by QSAR tools. These incorrect data hamper prediction and are a source of noise in the development of QSAR models. It is thus essential to establish a large benchmark database consisting only of well-validated Ames test results to build more accurate QSAR models.
  •  
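For reference, the two performance measures quoted in record 6 have the standard confusion-matrix definitions, where TP, TN, FP and FN are the true/false positives/negatives relative to the experimental Ames call:

    \text{sensitivity} = \frac{TP}{TP + FN}, \qquad
    \text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}

For example, a tool that correctly flags 60 of 100 Ames-positive chemicals has 60% sensitivity, independently of how well it classifies the Ames-negative chemicals.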
7.
  • Mansouri, Kamel, et al. (author)
  • CoMPARA : Collaborative Modeling Project for Androgen Receptor Activity
  • 2020
  • In: Environmental Health Perspectives. - 0091-6765 .- 1552-9924. ; 128:2, pp. 1-17
  • Journal article (peer-reviewed). Abstract:
    • BACKGROUND: Endocrine disrupting chemicals (EDCs) are xenobiotics that mimic the interaction of natural hormones and alter synthesis, transport, or metabolic pathways. The prospect of EDCs causing adverse health effects in humans and wildlife has led to the development of scientific and regulatory approaches for evaluating bioactivity. This need is being addressed using high-throughput screening (HTS) in vitro approaches and computational modeling. OBJECTIVES: In support of the Endocrine Disruptor Screening Program, the U.S. Environmental Protection Agency (EPA) led two worldwide consortia to virtually screen chemicals for their potential estrogenic and androgenic activities. Here, we describe the Collaborative Modeling Project for Androgen Receptor Activity (CoMPARA) efforts, which follow the steps of the Collaborative Estrogen Receptor Activity Prediction Project (CERAPP). METHODS: The CoMPARA list of screened chemicals built on CERAPP's list of 32,464 chemicals to include additional chemicals of interest, as well as simulated ToxCast™ metabolites, totaling 55,450 chemical structures. Computational toxicology scientists from 25 international groups contributed 91 predictive models for binding, agonist, and antagonist activity predictions. Models were underpinned by a common training set of 1,746 chemicals compiled from a combined data set of 11 ToxCast™/Tox21 HTS in vitro assays. RESULTS: The resulting models were evaluated using curated literature data extracted from different sources. To overcome the limitations of single-model approaches, CoMPARA predictions were combined into consensus models that provided an averaged predictive accuracy of approximately 80% for the evaluation set. DISCUSSION: The strengths and limitations of the consensus predictions were discussed with example chemicals; the models were then implemented in the free and open-source OPERA application to enable screening of new chemicals with a defined applicability domain and accuracy assessment. This implementation was used to screen the entire EPA DSSTox database of approximately 875,000 chemicals, and their predicted AR activities have been made available on the EPA CompTox Chemicals dashboard and the National Toxicology Program's Integrated Chemical Environment.
  •  
8.
  • Martens, Marvin, et al. (author)
  • ELIXIR and Toxicology : a community in development
  • 2021
  • In: F1000 Research. - F1000 Research Ltd. - 2046-1402. ; 10, pp. 1129-1129
  • Journal article (peer-reviewed). Abstract:
    • Toxicology has been an active research field for many decades, with academic, industrial and government involvement. Modern omics and computational approaches are changing the field, from merely disease-specific observational models into target-specific predictive models. Traditionally, toxicology has strong links with other fields such as biology, chemistry, pharmacology and medicine. With the rise of synthetic and newly engineered materials, alongside ongoing prioritisation needs in chemical risk assessment for existing chemicals, early predictive evaluations are becoming of utmost importance for both scientific and regulatory purposes. ELIXIR is an intergovernmental organisation that brings together life science resources from across Europe. To coordinate the linkage of various life science efforts around modern predictive toxicology, the establishment of a new ELIXIR Community is seen as instrumental. In the past few years, joint efforts, building on incidental overlap, have been piloted in the context of ELIXIR. For example, the EU-ToxRisk, diXa, HeCaToS, transQST, and nanotoxicology communities have worked with the ELIXIR TeSS, Bioschemas, and Compute Platforms and activities. In 2018, a core group of interested parties wrote a proposal outlining a sketch of what this new ELIXIR Toxicology Community would look like. A recent workshop (held September 30th to October 1st, 2020) extended this into an ELIXIR Toxicology roadmap and a shortlist of limited-investment, high-gain collaborations to give body to this new community. This Whitepaper outlines the results of these efforts and defines our vision of the ELIXIR Toxicology Community and how it complements other ELIXIR activities.
  •  
9.
  • O'Boyle, Noel, et al. (author)
  • Open Data, Open Source and Open Standards in chemistry : The Blue Obelisk five years on
  • 2011
  • In: Journal of Cheminformatics. - BioMed Central. - 1758-2946. ; 3, pp. 37-
  • Journal article (peer-reviewed). Abstract:
    • Background: The Blue Obelisk movement was established in 2005 as a response to the lack of Open Data, Open Standards and Open Source (ODOSOS) in chemistry. It aims to make it easier to carry out chemistry research by promoting interoperability between chemistry software, encouraging cooperation between Open Source developers, and developing community resources and Open Standards. Results: This contribution looks back on the work carried out by the Blue Obelisk in the past 5 years and surveys progress and remaining challenges in the areas of Open Data, Open Standards, and Open Source in chemistry. Conclusions: We show that the Blue Obelisk has been very successful in bringing together researchers and developers with common interests in ODOSOS, leading to development of many useful resources freely available to the chemistry community.
  •  
10.
  •  
11.
  • Spjuth, Ola, 1977-, et al. (author)
  • XMetDB : an open access database for xenobiotic metabolism
  • 2016
  • In: Journal of Cheminformatics. - Springer Science and Business Media LLC. - 1758-2946. ; 8
  • Journal article (peer-reviewed). Abstract:
    • Xenobiotic metabolism is an active research topic, but the limited amount of openly available high-quality biotransformation data constrains predictive modeling. Current databases often default to the most commonly available information: which enzyme metabolizes a compound; neither the experimental conditions nor the atoms that undergo metabolism are captured. We present XMetDB, an open access database for drugs and other xenobiotics and their respective metabolites. The database contains chemical structures of xenobiotic biotransformations with substrate atoms annotated as reaction centres, the resulting product formed, and the catalyzing enzyme, type of experiment, and literature references. Associated with the database is a web interface for the submission and retrieval of experimental metabolite data for drugs and other xenobiotics in various formats, and a web API for programmatic access is also available. The database is open for data deposition, and a curation scheme is in place for quality control. An extensive guide on how to enter experimental data into the database is available from the XMetDB wiki. XMetDB formalizes how biotransformation data should be reported, and the openly available, systematically labelled data is a big step towards better models for predictive metabolism.
  •  
12.
  • Willighagen, Egon, et al. (author)
  • Computational toxicology using the OpenTox application programming interface and Bioclipse
  • 2011
  • In: BMC Research Notes. - Springer Science and Business Media LLC. - 1756-0500. ; 4:1, pp. 487-
  • Journal article (peer-reviewed). Abstract:
    • Background: Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources, however, requires a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications. Findings: This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules (an illustrative sketch of such a REST call follows this record). Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open standard that simplifies communication. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources. Conclusions: A novel computational toxicity assessment platform was generated by integrating two open science platforms related to toxicology: Bioclipse, which combines a rich scriptable and graphical workbench environment for the integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets through the use of the Open Standards from the OpenTox Application Programming Interface. This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, taking advantage of the Bioclipse workbench to handle the technical layers.
  •  
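Record 12 describes Bioclipse consuming OpenTox web services. The fragment below is only a generic illustration of how an OpenTox-style REST resource could be fetched from Java; the host name and compound identifier are invented placeholders, authentication is ignored, and it does not reproduce the actual Bioclipse integration, which is driven from the Bioclipse workbench itself.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Sketch: fetch an OpenTox-style resource over HTTP (placeholder URI, no auth). */
public class OpenToxClientSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical compound URI on a hypothetical OpenTox deployment.
        String compoundUri = "https://opentox.example.org/compound/123";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(compoundUri))
                // OpenTox resources are commonly served as RDF; services may also
                // offer other representations via content negotiation.
                .header("Accept", "application/rdf+xml")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("HTTP status: " + response.statusCode());
        System.out.println(response.body());
    }
}
```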
13.
  • Willighagen, Egon L., et al. (author)
  • The Chemistry Development Kit (CDK) v2.0 : atom typing, depiction, molecular formulas, and substructure searching
  • 2017
  • In: Journal of Cheminformatics. - Springer Science and Business Media LLC. - 1758-2946. ; 9
  • Journal article (peer-reviewed). Abstract:
    • Background: The Chemistry Development Kit (CDK) is a widely used open source cheminformatics toolkit, providing data structures to represent chemical concepts along with methods to manipulate such structures and perform computations on them. The library implements a wide variety of cheminformatics algorithms, ranging from chemical structure canonicalization to molecular descriptor calculations and pharmacophore perception, and is used in drug discovery, metabolomics, and toxicology. Over the last 10 years, however, the code base has grown significantly, resulting in many complex interdependencies among components and poor performance of many algorithms. Results: We report improvements to the CDK v2.0 since the v1.2 release series, specifically addressing the increased functional complexity and poor performance. We first summarize the addition of new functionality, such as atom typing and molecular formula handling, and improvements to existing functionality that have led to significantly better performance for substructure searching, molecular fingerprints, and rendering of molecules (a brief usage sketch follows this record). Second, we outline how the CDK has evolved with respect to quality control and the approaches we have adopted to ensure stability, including a code review mechanism. Conclusions: This paper highlights our continued efforts to provide a community-driven, open source cheminformatics library, and shows that such collaborative projects can thrive over extended periods of time, resulting in a high-quality and performant library. By taking advantage of community support and contributions, we show that an open source cheminformatics project can act as a peer-reviewed publishing platform for scientific computing software.
  •  
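As a pointer to the kind of functionality record 13 describes, the sketch below parses SMILES with the CDK, derives a molecular formula, runs atom typing, and performs a substructure match. It assumes a CDK 2.x dependency on the classpath (for example the cdk-bundle artifact); the class and method names come from the public CDK API, but treat this as an illustrative sketch rather than code from the paper.

```java
import org.openscience.cdk.exception.CDKException;
import org.openscience.cdk.interfaces.IAtomContainer;
import org.openscience.cdk.isomorphism.Pattern;
import org.openscience.cdk.isomorphism.VentoFoggia;
import org.openscience.cdk.silent.SilentChemObjectBuilder;
import org.openscience.cdk.smiles.SmilesParser;
import org.openscience.cdk.tools.manipulator.AtomContainerManipulator;
import org.openscience.cdk.tools.manipulator.MolecularFormulaManipulator;

/** Small CDK example: SMILES parsing, formula, atom typing, substructure search. */
public class CdkSketch {
    public static void main(String[] args) throws CDKException {
        SmilesParser parser = new SmilesParser(SilentChemObjectBuilder.getInstance());

        // Aspirin as the target molecule, a C(=O)O fragment as the query.
        IAtomContainer aspirin = parser.parseSmiles("CC(=O)Oc1ccccc1C(=O)O");
        IAtomContainer fragment = parser.parseSmiles("C(=O)O");

        // Molecular formula handling (C9H8O4 for aspirin).
        String formula = MolecularFormulaManipulator.getString(
                MolecularFormulaManipulator.getMolecularFormula(aspirin));
        System.out.println("Molecular formula: " + formula);

        // Atom typing: assign CDK atom types to every atom of the structure.
        AtomContainerManipulator.percieveAtomTypesAndConfigureAtoms(aspirin);

        // Substructure search with the Vento-Foggia matcher.
        Pattern pattern = VentoFoggia.findSubstructure(fragment);
        System.out.println("C(=O)O fragment present: " + pattern.matches(aspirin));
    }
}
```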
