SwePub

Results for the search "WFRF:(Olmin Amanda)"

  • Results 1-7 of 7
1.
  • Beal, Jacob, et al. (authors)
  • Robust estimation of bacterial cell count from optical density
  • 2020
  • In: Communications Biology. Springer Science and Business Media LLC. ISSN 2399-3642; 3:1
  • Journal article (peer-reviewed), abstract (see the sketch after this entry):
    • Optical density (OD) is widely used to estimate the density of cells in liquid culture, but cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses instrument effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence per cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
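The recommended protocol above boils down to a simple calculation once the microsphere dilution series has been measured: fit a proportional OD-to-particle-count relationship within the instrument's linear range and invert it for new samples. A minimal Python sketch of that idea, with made-up numbers rather than data from the study:

```python
# A minimal sketch, assuming a proportional OD-to-particle-count relationship.
# The dilution series, OD readings and linear-range cutoffs are made-up
# illustrative numbers, not data from the study.
import numpy as np

# Known particle counts per well in a two-fold serial dilution of microspheres.
particles = np.array([3.0e8 / 2**i for i in range(8)])
# Blank-corrected OD readings for the same wells.
od = np.array([0.94, 0.48, 0.245, 0.12, 0.061, 0.031, 0.016, 0.008])

# Keep only readings inside the instrument's effective linear range.
mask = (od > 0.01) & (od < 0.8)

# Least-squares fit of the proportional model OD = k * particles.
k = np.sum(od[mask] * particles[mask]) / np.sum(particles[mask] ** 2)

# Convert a new sample's OD into an estimated particle (cell) count per well.
sample_od = 0.35
print(f"slope k = {k:.3e} OD per particle")
print(f"estimated count at OD {sample_od}: {sample_od / k:.3e}")
```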
2.
  • Govindarajan, Hariprasath, et al. (authors)
  • Self-Supervised Representation Learning for Content Based Image Retrieval of Complex Scenes
  • 2021
  • In: IEEE Intelligent Vehicles Symposium, Proceedings. IEEE. ISBN 9781665479219, 9781665479226; pp. 249-256
  • Conference paper (peer-reviewed), abstract (see the sketch after this entry):
    • Although Content Based Image Retrieval (CBIR) is an active research field, application to images simultaneously containing multiple objects has received limited research interest. For such complex images, it is difficult to precisely convey the query intention, to encode all the image aspects into one compact global feature representation and to unambiguously define label similarity or dissimilarity. Motivated by the recent success on many visual benchmark tasks, we propose a self-supervised method to train a feature representation learning model. We propose the use of multiple query images and an attention-based architecture that benefits from this by extracting features from diverse image aspects. The method shows promising performance on road scene datasets and consistently improves when multiple query images are used instead of a single query image. © 2021 IEEE.
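As a rough illustration of the multi-query idea above, the sketch below fuses several query-image embeddings with a simple dot-product attention pooling and ranks a gallery by cosine similarity. The pooling scheme is an assumption for illustration only, not the attention architecture used in the paper:

```python
# A minimal sketch, assuming dot-product attention pooling over query features;
# an illustrative stand-in, not the paper's architecture. Features are random
# placeholders for embeddings produced by a trained model.
import numpy as np

def attention_pool(query_feats: np.ndarray) -> np.ndarray:
    """Fuse (n_queries, d) features into a single d-dimensional probe."""
    q = query_feats.mean(axis=0, keepdims=True)                   # pooled probe
    scores = q @ query_feats.T / np.sqrt(query_feats.shape[1])    # (1, n_queries)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return (weights @ query_feats).ravel()                        # weighted fusion

def retrieve(query_feats, gallery_feats, top_k=5):
    """Rank gallery images by cosine similarity to the fused query."""
    fused = attention_pool(query_feats)
    fused /= np.linalg.norm(fused)
    gallery = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return np.argsort(-(gallery @ fused))[:top_k]

rng = np.random.default_rng(0)
queries = rng.normal(size=(3, 128))      # three query images, 128-d embeddings
gallery = rng.normal(size=(1000, 128))   # embedding bank of gallery images
print(retrieve(queries, gallery))
```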
3.
  • Lindqvist, Jakob, 1992, et al. (authors)
  • A General Framework for Ensemble Distribution Distillation
  • 2020
  • In: 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE. ISBN 9781728166629; 2020-September
  • Conference paper (peer-reviewed), abstract (see the sketch after this entry):
    • Ensembles of neural networks have been shown to give better predictive performance and more reliable uncertainty estimates than individual networks. Additionally, ensembles allow the uncertainty to be decomposed into aleatoric (data) and epistemic (model) components, giving a more complete picture of the predictive uncertainty. Ensemble distillation is the process of compressing an ensemble into a single model, often resulting in a leaner model that still outperforms the individual ensemble members. Unfortunately, standard distillation erases the natural uncertainty decomposition of the ensemble. We present a general framework for distilling both regression and classification ensembles in a way that preserves the decomposition. We demonstrate the desired behaviour of our framework and show that its predictive performance is on par with standard distillation.
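The decomposition that the framework above sets out to preserve can be computed directly from an ensemble's class probabilities: total predictive entropy splits into expected member entropy (aleatoric) plus the members' disagreement (epistemic). A minimal sketch with made-up member predictions:

```python
# A minimal sketch of the entropy-based decomposition: total predictive entropy
# = expected member entropy (aleatoric) + mutual information (epistemic).
# The ensemble probabilities are made-up illustrative numbers.
import numpy as np

def entropy(p, axis=-1):
    return -np.sum(p * np.log(np.clip(p, 1e-12, None)), axis=axis)

# Predictions from an ensemble of M=3 members for one input over 3 classes.
member_probs = np.array([
    [0.70, 0.20, 0.10],
    [0.10, 0.80, 0.10],
    [0.30, 0.40, 0.30],
])

total = entropy(member_probs.mean(axis=0))   # entropy of the averaged prediction
aleatoric = entropy(member_probs).mean()     # expected entropy of each member
epistemic = total - aleatoric                # disagreement between members
print(f"total={total:.3f}  aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}")
```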
4.
  • Lindqvist, Jakob, 1992, et al. (authors)
  • Generalised Active Learning With Annotation Quality Selection
  • 2023
  • In: IEEE International Workshop on Machine Learning for Signal Processing, MLSP. ISSN 2161-0371, 2161-0363; 2023-September
  • Conference paper (peer-reviewed), abstract (see the sketch after this entry):
    • In this paper we promote a general formulation of active learning (AL), wherein the typically binary decision to annotate a point or not is extended to selecting the qualities with which the points should be annotated. By linking the annotation quality to the cost of acquiring the label, we can trade a lower quality for a larger set of training samples, which may improve learning for the same annotation cost. To investigate this AL formulation, we introduce a concrete criterion, based on the mutual information (MI) between model parameters and noisy labels, for selecting annotation qualities for the entire dataset, before any labels are acquired. We illustrate the usefulness of our formulation with examples for both classification and regression and find that MI is a good candidate for a criterion, but its complexity limits its usefulness.
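To make the quality-versus-quantity trade-off above concrete, the sketch below compares two annotation plans of equal cost by the mutual information between parameters and noisy labels in a Bayesian linear model, where that MI has a closed form. The linear-Gaussian setting, cost model and noise levels are illustrative assumptions, not the paper's setup:

```python
# A minimal sketch, assuming a Bayesian linear model y = Xw + eps with
# eps ~ N(0, noise_var) and w ~ N(0, prior_var * I), for which
# I(w; y) = 0.5 * logdet(I + prior_var * X X^T / noise_var).
# The cost model, noise levels and budget are illustrative assumptions.
import numpy as np

def mutual_information(X, noise_var, prior_var=1.0):
    n = X.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(n) + prior_var * (X @ X.T) / noise_var)
    return 0.5 * logdet

rng = np.random.default_rng(1)
pool = rng.normal(size=(200, 5))   # unlabelled candidate inputs

# Plan A: 20 points with precise labels (noise_var=0.05, cost 5 per label).
# Plan B: 100 points with noisy labels  (noise_var=0.5,  cost 1 per label).
# Both plans spend the same hypothetical budget of 100.
plan_a = mutual_information(pool[:20], noise_var=0.05)
plan_b = mutual_information(pool[:100], noise_var=0.5)
print(f"I(w; y), plan A (few, precise): {plan_a:.2f} nats")
print(f"I(w; y), plan B (many, noisy):  {plan_b:.2f} nats")
```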
5.
  • Olmin, Amanda, 1994-, et al. (authors)
  • Active Learning with Weak Supervision for Gaussian Processes
  • 2023
  • In: Communications in Computer and Information Science. Singapore: Springer Nature. ISSN 1865-0937, 1865-0929; 1792 CCIS, pp. 195-204
  • Conference paper (peer-reviewed), abstract (see the sketch after this entry):
    • Annotating data for supervised learning can be costly. When the annotation budget is limited, active learning can be used to select and annotate those observations that are likely to give the most gain in model performance. We propose an active learning algorithm that, in addition to selecting which observation to annotate, selects the precision of the annotation that is acquired. Assuming that annotations with low precision are cheaper to obtain, this allows the model to explore a larger part of the input space, with the same annotation budget. We build our acquisition function on the previously proposed BALD objective for Gaussian Processes, and empirically demonstrate the gains of being able to adjust the annotation precision in the active learning loop.
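A minimal sketch of an acquisition of the kind described above, assuming GP regression with an RBF kernel and a hypothetical per-precision cost; it illustrates adjusting the annotation precision in the loop rather than reproducing the paper's exact acquisition function:

```python
# A minimal sketch, assuming GP regression with an RBF kernel and a simple
# per-precision cost model; the acquisition here is illustrative, not the
# paper's exact objective. Information gained about f(x) from a label with
# noise variance s2 is 0.5 * log(1 + var_f(x) / s2), divided by the cost.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def posterior_variance(x_train, noise_vars, x_cand):
    """GP posterior variance at candidates, heteroscedastic training noise."""
    K = rbf_kernel(x_train, x_train) + np.diag(noise_vars)
    Ks = rbf_kernel(x_cand, x_train)
    return 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)  # prior var = 1

def bald_per_cost(var_f, noise_var, cost):
    return 0.5 * np.log1p(var_f / noise_var) / cost

rng = np.random.default_rng(2)
x_train = rng.uniform(-3, 3, size=(10, 1))      # already-annotated inputs
noise_vars = np.full(10, 0.1)                   # precision of existing labels
x_cand = rng.uniform(-3, 3, size=(50, 1))       # candidate pool
var_f = posterior_variance(x_train, noise_vars, x_cand)

# Two annotation options per candidate: precise but expensive, or noisy but cheap.
options = [("precise", 0.01, 5.0), ("noisy", 0.5, 1.0)]
scores = {name: bald_per_cost(var_f, nv, cost) for name, nv, cost in options}
best = max((s.max(), name, int(s.argmax())) for name, s in scores.items())
print(f"best acquisition: candidate {best[2]} with a {best[1]} annotation")
```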
6.
  • Olmin, Amanda, 1994- (author)
  • On Uncertainty Quantification in Neural Networks: Ensemble Distillation and Weak Supervision
  • 2022
  • Licentiate thesis (other academic/artistic), abstract:
    • Machine learning models are employed in several aspects of society, ranging from autonomous cars to justice systems. They affect your everyday life, for instance through recommendations on your streaming service and by informing decisions in healthcare, and are expected to have even more influence in society in the future. Among these machine learning models, we find neural networks, which have had a wave of success within a wide range of fields in recent years. The success of neural networks is partly attributed to the very flexible model structure and, it seems, endless possibilities in terms of extensions.
      While neural networks come with great flexibility, they are so-called black-box models and therefore offer little in terms of interpretability. In other words, it is seldom possible to explain or even understand why a neural network makes a certain decision. On top of this, these models are known to be overconfident, which means that they attribute low uncertainty to their predictions, even when uncertainty is, in reality, high. Previous work has demonstrated how this issue can be alleviated with the help of ensembles, i.e. by weighing the opinion of multiple models in prediction. In Paper I, we investigate this possibility further by creating a general framework for ensemble distribution distillation, developed for the purpose of preserving the performance benefits of ensembles while reducing computational costs. Specifically, we extend ensemble distribution distillation to make it applicable to tasks beyond classification and demonstrate the usefulness of the framework in, for example, out-of-distribution detection.
      Another obstacle in the use of neural networks, especially deep neural networks, is that supervised training of these models can require a large amount of labelled data. The process of annotating a large amount of data is costly, time-consuming and also prone to errors. Specifically, there is a risk of incorporating label noise in the data. In Paper II, we investigate the effect of label noise on model performance. In particular, under an input-dependent noise model, we analyse the properties of the asymptotic risk minimisers of strictly proper and a set of previously proposed, robust loss functions. The results demonstrate that reliability, in terms of a model’s uncertainty estimates, is an important aspect to consider also in weak supervision and, particularly, when developing noise-robust training algorithms.
      Related to annotation costs in supervised learning is the use of active learning to optimise model performance under budget constraints. The goal of active learning, in this context, is to identify and annotate the observations that are most useful for the model’s performance. In Paper III, we propose an approach for taking advantage of intentionally weak annotations in active learning. What is proposed, more specifically, is to incorporate the possibility to collect cheaper, but noisy, annotations in the active learning algorithm. Thus, the same annotation budget is enough to annotate more data points for training. In turn, the model gets to explore a larger part of the input space. We demonstrate empirically how this can lead to gains in model performance.
7.
  • Olmin, Amanda, 1994-, et al. (authors)
  • Robustness and Reliability When Training With Noisy Labels
  • 2022
  • In: Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS) 2022. JMLR; pp. 922-942
  • Conference paper (peer-reviewed), abstract (see the sketch after this entry):
    • Labelling of data for supervised learning can be costly and time-consuming and the risk of incorporating label noise in large data sets is imminent. When training a flexible discriminative model using a strictly proper loss, such noise will inevitably shift the solution towards the conditional distribution over noisy labels. Nevertheless, while deep neural networks have proven capable of fitting random labels, regularisation and the use of robust loss functions empirically mitigate the effects of label noise. However, such observations concern robustness in accuracy, which is insufficient if reliable uncertainty quantification is critical. We demonstrate this by analysing the properties of the conditional distribution over noisy labels for an input-dependent noise model. In addition, we evaluate the set of robust loss functions characterised by noise-insensitive, asymptotic risk minimisers. We find that strictly proper and robust loss functions both offer asymptotic robustness in accuracy, but neither guarantee that the final model is calibrated. Moreover, even with robust loss functions, overfitting is an issue in practice. With these results, we aim to explain observed robustness of common training practices, such as early stopping, to label noise. In addition, we aim to encourage the development of new noise-robust algorithms that not only preserve accuracy but that also ensure reliability.
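The central observation above, that a strictly proper loss asymptotically recovers the conditional distribution over noisy labels, can be seen in a toy computation: mixing a clean class-conditional with a label-flip matrix preserves the arg-max (accuracy) while distorting the probabilities (calibration). The numbers below are made up for illustration:

```python
# A minimal sketch: under a strictly proper loss, the asymptotic minimiser is
# the conditional distribution over noisy labels. The clean distribution and
# the (input-dependent) flip rates are made-up illustrative numbers.
import numpy as np

clean = np.array([0.9, 0.1])           # clean p(y | x) for a binary problem
for flip_rate in (0.0, 0.2, 0.4):      # symmetric label noise at this input
    T = np.array([[1 - flip_rate, flip_rate],     # T[y_noisy, y_clean]
                  [flip_rate, 1 - flip_rate]])
    noisy = T @ clean                  # p(y_noisy | x): what the model converges to
    preserved = noisy.argmax() == clean.argmax()
    print(f"flip={flip_rate:.1f}  converges to {noisy.round(3)}  "
          f"argmax preserved: {preserved}  max prob: {noisy.max():.2f}")
```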