SwePub
Search the SwePub database


Result list for the search "WFRF:(Andersson Emil) ;pers:(Gustavsson Emil 1987)"


  • Result 1-2 of 2
1.
  • Peuker, Sebastian, et al. (author)
  • Efficient Isotope Editing of Proteins for Site-Directed Vibrational Spectroscopy
  • 2016
  • In: Journal of the American Chemical Society. - : American Chemical Society (ACS). - 0002-7863 .- 1520-5126. ; 138:7, s. 2312-2318
  • Journal article (peer-reviewed)
    • Vibrational spectra contain unique information on protein structure and dynamics. However, this information is often obscured by spectral congestion, and site-selective information is not available. In principle, sites of interest can be spectrally identified by isotope shifts, but site-specific isotope labeling of proteins is currently possible only for favorable amino acids or with prohibitively low yields. Here we present an efficient cell-free expression system for the site-specific incorporation of any isotope-labeled amino acid into proteins. We synthesized 1.6 mg of green fluorescent protein with an isotope-labeled tyrosine from 100 mL of cell-free reaction extract. We unambiguously identified spectral features of the tyrosine in the fingerprint region of the time-resolved infrared absorption spectra. Kinetic analysis confirmed the existence of an intermediate state between photoexcitation and proton transfer that lives for 3 ps. Our method lifts vibrational spectroscopy of proteins to a higher level of structural specificity.
2.
  • Önnheim, Magnus, 1985, et al. (author)
  • Reinforcement Learning Informed by Optimal Control
  • 2019
  • In: Lecture Notes in Computer Science. - Cham : Springer International Publishing. - 0302-9743 .- 1611-3349. ; 11731, s. 403-407
  • Conference paper (peer-reviewed)
    • Model-free reinforcement learning has seen tremendous advances in recent years; however, practical applications of pure reinforcement learning are still limited by sample inefficiency and the difficulty of giving robustness and stability guarantees for the proposed agents. Given access to an expert policy, one can increase sample efficiency and make learning safer by learning not only from data but also from the expert's actions. In this paper we ask whether expert learning can be accelerated and stabilized given access to a family of experts designed according to optimal control principles, specifically linear quadratic regulators. In particular, we consider the nominal model of a system as part of the action space of a reinforcement learning agent. Further, using the nominal controller, we design customized reward functions for training a reinforcement learning agent, and perform ablation studies on a set of simple benchmark problems.
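The linear quadratic regulator experts mentioned in the abstract above can be illustrated with a minimal sketch. This is not the authors' code; the double-integrator system, cost matrices, and horizon are illustrative assumptions. It shows an expert policy of the kind an RL agent could learn from: a feedback gain obtained by iterating the discrete-time Riccati equation on a nominal model.

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati recursion to a fixed point
    and return the feedback gain K for the expert policy u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Assumed toy nominal model: a discretized double integrator.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)          # state cost (illustrative)
R = np.array([[0.1]])  # control cost (illustrative)

K = lqr_gain(A, B, Q, R)

# Closed-loop rollout: the expert drives the state toward the origin.
x = np.array([[1.0], [0.0]])
for _ in range(100):
    u = -K @ x          # expert action from the LQR controller
    x = A @ x + B @ u   # nominal dynamics
```

Under the paper's setup, such a controller could serve both as a source of expert actions and as a component in shaping the reward for the learning agent.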


 
