SwePub
Search the SwePub database

  Advanced search

Hit list for search "L773:2632 2153"

Search: L773:2632 2153

  • Results 1-10 of 23
1.
  • Chang, Yu-Wei, et al. (author)
  • Neural network training with highly incomplete medical datasets
  • 2022
  • In: Machine Learning: Science and Technology. - : IOP Publishing. - 2632-2153. ; 3:3
  • Journal article (peer reviewed), abstract:
    • Neural network training and validation rely on the availability of large high-quality datasets. However, in many cases only incomplete datasets are available, particularly in health care applications, where each patient typically undergoes different clinical procedures or can drop out of a study. Since the data to train the neural networks need to be complete, most studies discard the incomplete datapoints, which reduces the size of the training data, or impute the missing features, which can lead to artifacts. Alas, both approaches are inadequate when a large portion of the data is missing. Here, we introduce GapNet, an alternative deep-learning training approach that can use highly incomplete datasets without overfitting or introducing artifacts. First, the dataset is split into subsets of samples containing all values for a certain cluster of features. Then, these subsets are used to train individual neural networks. Finally, this ensemble of neural networks is combined into a single neural network whose training is fine-tuned using all complete datapoints. Using two highly incomplete real-world medical datasets, we show that GapNet improves the identification of patients with underlying Alzheimer's disease pathology and of patients at risk of hospitalization due to Covid-19. Compared to commonly used imputation methods, this improvement suggests that GapNet can become a general tool to handle incomplete medical datasets.
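The split-then-fine-tune pipeline described in the GapNet abstract starts from a data split that can be sketched in a few lines; the feature clusters and toy rows below are illustrative assumptions, not the paper's datasets or code:

```python
# Hedged sketch of the GapNet data split: rows with missing values (None) are
# grouped into per-cluster subsets that are individually complete (stage 1),
# while fully complete rows are reserved for fine-tuning (stage 2).

def complete_subset(rows, feature_idx):
    """Return rows with no missing value in the given feature cluster."""
    return [
        [row[i] for i in feature_idx]
        for row in rows
        if all(row[i] is not None for i in feature_idx)
    ]

def split_for_gapnet(rows, clusters):
    """Stage 1: one complete subset per feature cluster.
    Stage 2: rows complete in every feature, used for fine-tuning."""
    stage1 = [complete_subset(rows, c) for c in clusters]
    stage2 = [row for row in rows if all(v is not None for v in row)]
    return stage1, stage2

# Toy dataset: 4 samples, 4 features, with gaps.
rows = [
    [1.0, 2.0, None, None],
    [0.5, 1.5, 3.0, 4.0],
    [None, None, 2.5, 3.5],
    [0.1, 0.2, 0.3, 0.4],
]
stage1, stage2 = split_for_gapnet(rows, clusters=[(0, 1), (2, 3)])
print(len(stage1[0]), len(stage1[1]), len(stage2))  # 3 3 2
```

Each stage-1 subset would then train its own network, and the ensemble would be fine-tuned on the stage-2 rows only.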
2.
  • Strong, Giles C, et al. (author)
  • TomOpt: differential optimisation for task- and constraint-aware design of particle detectors in the context of muon tomography
  • 2024
  • In: Machine Learning: Science and Technology. - : IOP Publishing. - 2632-2153. ; 5:3
  • Journal article (peer reviewed), abstract:
    • We describe a software package, TomOpt, developed to optimise the geometrical layout and specifications of detectors designed for tomography by scattering of cosmic-ray muons. The software exploits differentiable programming for the modeling of muon interactions with detectors and scanned volumes, the inference of volume properties, and the optimisation cycle performing the loss minimisation. In doing so, we provide the first demonstration of end-to-end-differentiable and inference-aware optimisation of particle physics instruments. We study the performance of the software on a relevant benchmark scenario and discuss its potential applications. Our code is available on Github (Strong et al 2024 available at: https://github.com/GilesStrong/tomopt).
3.
  • Aarrestad, Thea, et al. (author)
  • Fast convolutional neural networks on FPGAs with hls4ml
  • 2021
  • In: Machine Learning: Science and Technology. - : IOP Publishing. - 2632-2153. ; 2:4
  • Journal article (peer reviewed), abstract:
    • We introduce an automated tool for deploying ultra low-latency, low-power deep neural networks with convolutional layers on field-programmable gate arrays (FPGAs). By extending the hls4ml library, we demonstrate an inference latency of 5 μs using convolutional architectures, targeting microsecond latency applications like those at the CERN Large Hadron Collider. Considering benchmark models trained on the Street View House Numbers Dataset, we demonstrate various methods for model compression in order to fit the computational constraints of a typical FPGA device used in trigger and data acquisition systems of particle detectors. In particular, we discuss pruning and quantization-aware training, and demonstrate how resource utilization can be significantly reduced with little to no loss in model accuracy. We show that the FPGA critical resource consumption can be reduced by 97% with zero loss in model accuracy, and by 99% when tolerating a 6% accuracy degradation.
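Of the compression methods this abstract mentions, magnitude-based pruning is the simplest to illustrate. The threshold rule below is a generic sketch, not hls4ml's API (hls4ml consumes models that were already pruned during training):

```python
# Hedged sketch of magnitude-based pruning: zero out the smallest-magnitude
# fraction of weights so the remaining multipliers fit FPGA resources.
# Ties at the threshold are also dropped.

def prune_by_magnitude(weights, sparsity):
    """Zero out roughly the smallest-magnitude fraction `sparsity` of weights."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else -1.0
    return [[0.0 if abs(w) <= threshold else w for w in row] for row in weights]

# Toy 3x3 weight matrix; half the entries are near zero.
weights = [[0.9, -0.05, 0.4], [-0.01, 0.7, 0.02], [0.3, -0.8, 0.06]]
pruned = prune_by_magnitude(weights, sparsity=0.5)
zeros = sum(w == 0.0 for row in pruned for w in row)
print(zeros)  # 4
```

In practice the pruned weights are skipped entirely in the generated firmware, which is where the large resource savings come from.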
4.
  • Ahlberg Gagnér, Viktor, et al. (author)
  • Estimating the probability of coincidental similarity between atomic displacement parameters with machine learning
  • 2021
  • In: Machine Learning: Science and Technology. - : IOP Publishing. - 2632-2153. ; 2:3
  • Journal article (peer reviewed), abstract:
    • High-resolution diffraction studies of macromolecules incorporate the tensor form of the anisotropic displacement parameter (ADP) of atoms from their mean position. The comparison of these parameters requires a statistical framework that can handle the experimental and modeling errors linked to structure determination. Here, a Bayesian machine learning model is introduced that approximates ADPs with the random Wishart distribution. This model allows for the comparison of random samples from a distribution that is trained on experimental structures. The comparison revealed that the experimental similarity between atoms is larger than predicted by the random model for a substantial fraction of the comparisons. Different metrics between ADPs were evaluated and categorized based on how useful they are at detecting non-accidental similarity and whether they can be replaced by other metrics. The most complementary comparisons were provided by Euclidean, Riemann and Wasserstein metrics. The analysis of ADP similarity and the positional distance of atoms in bovine trypsin revealed a set of atoms with striking ADP similarity over a long physical distance, and generally the physical distance between atoms and their ADP similarity do not correlate strongly. A substantial fraction of long- and short-range ADP similarities does not form by coincidence and are reproducibly observed in different crystal structures of the same protein.
5.
  • Balabanov, Oleksandr, et al. (author)
  • Unsupervised interpretable learning of topological indices invariant under permutations of atomic bands
  • 2021
  • In: Machine Learning: Science and Technology. - : IOP Publishing. - 2632-2153. ; 2:2
  • Journal article (peer reviewed), abstract:
    • Multi-band insulating Bloch Hamiltonians with internal or spatial symmetries, such as particle-hole or inversion, may have topologically disconnected sectors of trivial atomic-limit (momentum-independent) Hamiltonians. We present a neural-network-based protocol for finding topologically relevant indices that are invariant under transformations between such trivial atomic-limit Hamiltonians, thus corresponding to the standard classification of band insulators. The work extends the method of 'topological data augmentation' for unsupervised learning introduced in (2020 Phys. Rev. Res. 2 013354) by also generalizing and simplifying the data generation scheme and by introducing a special 'mod' layer of the neural network appropriate for Z(n) classification. Ensembles of training data are generated by deforming seed objects in a way that preserves a discrete representation of continuity. In order to focus the learning on the topologically relevant indices, prior to the deformation procedure we stack the seed Bloch Hamiltonians with a complete set of symmetry-respecting trivial atomic bands. The obtained datasets are then used for training an interpretable neural network specially designed to capture the topological properties by learning physically relevant momentum space quantities, even in crystalline symmetry classes.
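The special 'mod' layer mentioned in this abstract makes the network output well-defined for Z(n) classification: raw outputs that differ by a multiple of n should map to the same class. A minimal sketch of that idea (a plain function, not the paper's implementation):

```python
# Hedged sketch of a 'mod' output layer for Z(n) classification: wrap the
# real-valued network output onto the cyclic group Z(n), so predictions
# differing by n are identified.

def mod_layer(raw_output, n):
    """Wrap a real-valued output onto Z(n); Python's % keeps the result in [0, n)."""
    return raw_output % n

# Outputs 1.0, 4.0 and -2.0 all represent the same class in Z(3).
print(mod_layer(1.0, 3), mod_layer(4.0, 3), mod_layer(-2.0, 3))  # 1.0 1.0 1.0
```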
6.
  • Erbin, Harold, et al. (author)
  • Deep multi-task mining Calabi-Yau four-folds
  • 2022
  • In: Machine Learning: Science and Technology. - : IOP Publishing. - 2632-2153. ; 3:1
  • Journal article (peer reviewed), abstract:
    • We continue earlier efforts in computing the dimensions of tangent space cohomologies of Calabi-Yau manifolds using deep learning. In this paper, we consider the dataset of all Calabi-Yau four-folds constructed as complete intersections in products of projective spaces. Employing neural networks inspired by state-of-the-art computer vision architectures, we improve earlier benchmarks and demonstrate that all four non-trivial Hodge numbers can be learned at the same time using a multi-task architecture. With 30% (80%) training ratio, we reach an accuracy of 100% for h^(1,1) and h^(2,1) (100% for both), 81% (96%) for h^(3,1), and 49% (83%) for h^(2,2). Assuming that the Euler number is known, as it is easy to compute, and taking into account the linear constraint arising from index computations, we get 100% total accuracy.
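The "linear constraint arising from index computations" that the abstract invokes can be sketched with the standard Calabi-Yau four-fold relations; these are textbook formulas assumed here, so check them against the paper before relying on them:

```python
# Hedged sketch of the standard index-theorem relations for Calabi-Yau
# four-folds: once three Hodge numbers are predicted, the fourth is fixed,
# and the Euler number follows linearly.

def h22_from_others(h11, h21, h31):
    # Assumed relation: h^{2,2} = 2 * (22 + 2*h^{1,1} + 2*h^{3,1} - h^{2,1})
    return 2 * (22 + 2 * h11 + 2 * h31 - h21)

def euler_number(h11, h21, h31):
    # Assumed relation: chi = 6 * (8 + h^{1,1} + h^{3,1} - h^{2,1})
    return 6 * (8 + h11 + h31 - h21)

# Sanity check on a well-known example: the sextic four-fold in P^5
# has h^{1,1} = 1, h^{2,1} = 0, h^{3,1} = 426.
print(h22_from_others(1, 0, 426), euler_number(1, 0, 426))  # 1752 2610
```

This is why knowing the (cheaply computed) Euler number lets the multi-task network's remaining prediction be corrected to 100% total accuracy.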
7.
  • Ghielmetti, N., et al. (author)
  • Real-time semantic segmentation on FPGAs for autonomous vehicles with hls4ml
  • 2022
  • In: Machine Learning: Science and Technology. - : IOP Publishing. - 2632-2153. ; 3:4
  • Journal article (peer reviewed), abstract:
    • In this paper, we investigate how field-programmable gate arrays can serve as hardware accelerators for real-time semantic segmentation tasks relevant for autonomous driving. Considering compressed versions of the ENet convolutional neural network architecture, we demonstrate a fully-on-chip deployment with a latency of 4.9 ms per image, using less than 30% of the available resources on a Xilinx ZCU102 evaluation board. The latency is reduced to 3 ms per image when increasing the batch size to ten, corresponding to the use case where the autonomous vehicle receives inputs from multiple cameras simultaneously. We show that, through aggressive filter reduction, heterogeneous quantization-aware training, and an optimized implementation of convolutional layers, the power consumption and resource utilization can be significantly reduced while maintaining accuracy on the Cityscapes dataset.
8.
  • Heinen, Stefan, et al. (author)
  • Reducing training data needs with minimal multilevel machine learning (M3L)
  • 2024
  • In: Machine Learning: Science and Technology. - : IOP Publishing. - 2632-2153. ; 5:2
  • Journal article (peer reviewed), abstract:
    • For many machine learning applications in science, data acquisition, not training, is the bottleneck even when avoiding experiments and relying on computation and simulation. Correspondingly, and in order to reduce cost and carbon footprint, training data efficiency is key. We introduce minimal multilevel machine learning (M3L), which optimizes training data set sizes using a loss function at multiple levels of reference data in order to minimize a combination of prediction error with overall training data acquisition costs (as measured by computational wall-times). Numerical evidence has been obtained for calculated atomization energies and electron affinities of thousands of organic molecules at various levels of theory including HF, MP2, DLPNO-CCSD(T), DF-HF-CABS, PNO-MP2-F12, and PNO-CCSD(T)-F12, and treating them with basis sets TZ, cc-pVTZ, and AVTZ-F12. Our M3L benchmarks for reaching chemical accuracy in distinct chemical compound sub-spaces indicate substantial computational cost reductions by factors of ∼1.01, 1.1, 3.8, 13.8, and 25.8 when compared to heuristic sub-optimal multilevel machine learning (M2L) for the data sets QM7b, QM9LCCSD(T), Electrolyte Genome Project, QM9CACESD(T), and QM9CECASD(T), respectively. Furthermore, we use M2L to investigate the performance for 76 density functionals when used within multilevel learning and building on the following levels drawn from the hierarchy of Jacob's Ladder: LDA, GGA, mGGA, and hybrid functionals. Within M2L and the molecules considered, mGGAs do not provide any noticeable advantage over GGAs. Among the functionals considered and in combination with LDA, the three on-average top-performing GGA and hybrid levels for atomization energies on QM9 using M3L correspond respectively to PW91, KT2, B97D, and τ-HCTH, B3LYP*(VWN5), and TPSSH.
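The core M3L idea, choosing per-level training-set sizes that hit an error target at minimal acquisition cost, can be sketched as a small optimization. The power-law error model, the costs, and the grid below are illustrative assumptions, not the paper's actual loss function:

```python
# Hedged sketch of the M3L optimization: pick training-set sizes per level of
# theory so a combined error model meets a target at minimal data-generation
# cost. Assumes (for illustration) independent power-law learning curves.
import itertools

def combined_error(sizes, prefactors, exponent=0.5):
    """Assumed model: each level's residual error decays as a power law."""
    return sum(a * n ** -exponent for a, n in zip(prefactors, sizes))

def cheapest_plan(target, prefactors, unit_costs, grid):
    """Grid-search the cheapest size combination meeting the error target."""
    best = None
    for sizes in itertools.product(grid, repeat=len(prefactors)):
        if combined_error(sizes, prefactors) <= target:
            cost = sum(n * c for n, c in zip(sizes, unit_costs))
            if best is None or cost < best[0]:
                best = (cost, sizes)
    return best

# Two levels: a cheap low-level method and an expensive high-level one.
plan = cheapest_plan(target=0.2, prefactors=[1.0, 0.5],
                     unit_costs=[1.0, 100.0], grid=[10, 100, 1000])
print(plan)  # (2000.0, (1000, 10)): many cheap samples, few expensive ones
```

The optimum buys many cheap low-level labels and only a handful of expensive high-level ones, which is the intuition behind the reported cost reductions over heuristic size choices.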
9.
  • Irwin, Ross, et al. (author)
  • Chemformer: a pre-trained transformer for computational chemistry
  • 2022
  • In: Machine Learning: Science and Technology. - : IOP Publishing. - 2632-2153. ; 3:1
  • Journal article (peer reviewed), abstract:
    • Transformer models coupled with the simplified molecular-input line-entry system (SMILES) have recently proven to be a powerful combination for solving challenges in cheminformatics. These models, however, are often developed specifically for a single application and can be very resource-intensive to train. In this work we present the Chemformer model, a Transformer-based model which can be quickly applied to both sequence-to-sequence and discriminative cheminformatics tasks. Additionally, we show that self-supervised pre-training can improve performance and significantly speed up convergence on downstream tasks. On direct synthesis and retrosynthesis prediction benchmark datasets we publish state-of-the-art results for top-1 accuracy. We also improve on existing approaches for a molecular optimisation task and show that Chemformer can optimise on multiple discriminative tasks simultaneously. Models, datasets and code will be made available after publication.
10.
  • Knijff, Lisanne, et al. (author)
  • Machine learning inference of molecular dipole moment in liquid water
  • 2021
  • In: Machine Learning: Science and Technology. - : IOP Publishing. - 2632-2153. ; 2:3
  • Journal article (peer reviewed), abstract:
    • Molecular dipole moment in liquid water is an intriguing property, partly due to the fact that there is no unique way to partition the total electron density into individual molecular contributions. The prevailing method to circumvent this problem is to use maximally localized Wannier functions, which perform a unitary transformation of the occupied molecular orbitals by minimizing the spread function of Boys. Here we revisit this problem using a data-driven approach satisfying two physical constraints, namely: (a) The displacement of the atomic charges is proportional to the Berry phase polarization; (b) Each water molecule has a formal charge of zero. It turns out that the distribution of molecular dipole moments in liquid water inferred from latent variables is surprisingly similar to that obtained from maximally localized Wannier functions. Apart from putting a maximum-likelihood footnote to the established method, this work highlights the capability of graph convolution based charge models and the importance of physical constraints on improving the model interpretability.
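Constraint (b) of this abstract, and the dipole assembly it implies, can be sketched directly: predicted atomic charges are shifted so each water molecule is exactly neutral, and the molecular dipole is the charge-weighted sum of atomic positions. The charges and geometry below are illustrative, not the paper's model output:

```python
# Hedged sketch of the zero-formal-charge constraint and dipole assembly:
# shift per-molecule charges to sum to zero, then form the dipole vector.

def neutralize(charges):
    """Shift charges so each molecule carries a formal charge of zero."""
    shift = sum(charges) / len(charges)
    return [q - shift for q in charges]

def dipole(charges, positions):
    """Molecular dipole moment as the charge-weighted sum of atomic positions."""
    return tuple(
        sum(q * r[k] for q, r in zip(charges, positions)) for k in range(3)
    )

# Toy water-like geometry (angstrom) with slightly unbalanced raw charges.
raw = [-0.82, 0.42, 0.43]  # O, H, H; sums to 0.03, so not yet neutral
pos = [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)]
q = neutralize(raw)
print(sum(q), dipole(q, pos))  # total charge ~0; dipole points between the H atoms
```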
Publication type
journal article (23)
Content type
peer reviewed (22)
other academic/artistic (1)
Author/editor
aut (2)
Petersson, Christoff ... (2)
Volpe, Giovanni, 197 ... (2)
Mehlig, Bernhard, 19 ... (2)
Schneider, Robin (2)
Linander, Hampus, 19 ... (2)
Aarrestad, Thea (1)
Ngadiuba, Jennifer (1)
Pierini, Maurizio (1)
Loncar, Vladimir (1)
Ghielmetti, Nicolo (1)
Summers, Sioni (1)
Linander, Hampus (1)
Iiyama, Yutaro (1)
Di Guglielmo, Giusep ... (1)
Duarte, Javier (1)
Harris, Philip (1)
Rankin, Dylan (1)
Jindariani, Sergo (1)
Pedro, Kevin (1)
Tran, Nhan (1)
Liu, Mia (1)
Kreinar, Edward (1)
Wu, Zhenbin (1)
Hoang, Duc (1)
Nord, Brian (1)
Abdallah, H (1)
Eklund, Anders (1)
Hermansson, Kersti, ... (1)
Dorigo, Tommaso (1)
Kieseler, Jan (1)
Pontzen, Andrew (1)
Engkvist, Ola (1)
Katona, Gergely, 197 ... (1)
Ahlberg Gagnér, Vikt ... (1)
Jensen, Maja, 1978 (1)
Pereira, Joana B. (1)
Lundh, Torbjörn, 196 ... (1)
Börjesson, Karl, 198 ... (1)
Romeo, Stefano, 1976 (1)
Giammanco, Andrea (1)
Peiris, Hiranya V. (1)
Bhowmik, Arghya (1)
Larfors, Magdalena, ... (1)
Lukas, Andre (1)
Zhang, Chao (1)
Granath, Mats, 1972 (1)
Harris, P. (1)
Arismendi Arrieta, D ... (1)
Viguera Diez, Juan, ... (1)
University
Göteborgs universitet (9)
Chalmers tekniska högskola (7)
Uppsala universitet (4)
Stockholms universitet (3)
Linköpings universitet (2)
Karolinska Institutet (2)
Luleå tekniska universitet (1)
Lunds universitet (1)
Language
English (23)
Research subject (UKÄ/SCB)
Natural sciences (20)
Engineering (6)
