SwePub


Hit list for the search "WFRF:(Unger Jonas)"


  • Results 11-20 of 112
11.
  • Eilertsen, Gabriel, 1984-, et al. (author)
  • Ensembles of GANs for synthetic training data generation
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • Insufficient training data is a major bottleneck for most deep learning practices, not least in medical imaging, where data is difficult to collect and publicly available datasets are scarce due to ethics and privacy. This work investigates the use of synthetic images, created by generative adversarial networks (GANs), as the only source of training data. We demonstrate that for this application, it is of great importance to make use of multiple GANs to improve the diversity of the generated data, i.e. to sufficiently cover the data distribution. While a single GAN can generate seemingly diverse image content, training on this data in most cases leads to severe over-fitting. We test the impact of ensembled GANs on synthetic 2D data as well as on common image datasets (SVHN and CIFAR-10), using both DCGANs and progressively growing GANs. As a specific use case, we focus on synthesizing digital pathology patches to provide anonymized training data.
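The abstract above describes drawing all training data from an ensemble of GANs rather than from a single generator. As a rough illustration of that idea (a minimal sketch, not code from the paper; the function name, latent size, and generator objects are hypothetical), pooling samples evenly from several independently trained generators could look like this in PyTorch:

    # Illustrative sketch: build a synthetic training set by pooling samples
    # from several independently trained GAN generators, so that the combined
    # data covers more of the target distribution than any single GAN would.
    import torch

    def sample_synthetic_dataset(generators, n_samples, latent_dim=128, device="cpu"):
        per_generator = n_samples // len(generators)
        batches = []
        for g in generators:
            g.eval()
            with torch.no_grad():
                z = torch.randn(per_generator, latent_dim, device=device)
                batches.append(g(z).cpu())        # fake images from this generator
        return torch.cat(batches, dim=0)          # pooled synthetic dataset

    # Usage (assuming `generators` is a list of trained generator networks):
    # synthetic_images = sample_synthetic_dataset(generators, n_samples=50_000)
    # A downstream classifier is then trained on these images only.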
12.
  • Eilertsen, Gabriel, et al. (author)
  • Evaluation of Tone Mapping Operators for HDR-Video
  • 2013
  • In: Computer graphics forum (Print). - : Wiley. - 0167-7055 .- 1467-8659. ; 32:7, pp. 275-284
  • Journal article (peer-reviewed), abstract:
    • Eleven tone-mapping operators intended for video processing are analyzed and evaluated with camera-captured and computer-generated high-dynamic-range content. After optimizing the parameters of the operators in a formal experiment, we inspect and rate the artifacts (flickering, ghosting, temporal color consistency) and color rendition problems (brightness, contrast and color saturation) they produce. This allows us to identify major problems and challenges that video tone-mapping needs to address. Then, we compare the tone-mapping results in a pair-wise comparison experiment to identify the operators that, on average, can be expected to perform better than the others and to assess the magnitude of differences between the best performing operators.
13.
  • Eilertsen, Gabriel, et al. (author)
  • Evaluation of tone mapping operators for HDR video
  • 2016. - 1st
  • In: High dynamic range video. - London, United Kingdom : Academic Press. - 9780081004128 ; pp. 185-206
  • Book chapter (other academic/artistic), abstract:
    • Tone mapping of HDR video is a challenging filtering problem. It is highly important to develop a framework for evaluation and comparison of tone mapping operators. This chapter gives an overview of different approaches for how evaluation of tone mapping operators can be conducted, including experimental setups, choice of input data, choice of tone mapping operators, and the importance of parameter tweaking for fair comparisons. This chapter also gives examples of previous evaluations, with a focus on the results from the most recent evaluation conducted by Eilertsen et al. [reference]. This results in a classification of the currently most commonly used tone mapping operators and an overview of their performance and possible artifacts.
14.
  • Eilertsen, Gabriel, et al. (author)
  • HDR image reconstruction from a single exposure using deep CNNs
  • 2017
  • In: ACM Transactions on Graphics. - : Association for Computing Machinery. - 0730-0301 .- 1557-7368. ; 36:6
  • Journal article (peer-reviewed), abstract:
    • Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images a set of different exposures is typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well-suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) that is specifically designed taking into account the challenges in predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution visually convincing HDR results in a wide range of situations, and that it generalizes well to reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare to existing methods for HDR expansion, and show high quality results also for image-based lighting. Finally, we evaluate the results in a subjective experiment performed on an HDR display. This shows that the reconstructed HDR images are visually convincing, with large improvements as compared to existing methods.
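The abstract above predicts HDR content only where the sensor has saturated. A minimal sketch of that general idea (not the paper's implementation; `hdr_net`, the threshold, and the gamma value are assumptions) is to blend a CNN prediction into the clipped regions of the linearized input:

    # Illustrative sketch: combine the linearized input exposure with a CNN
    # prediction, using the prediction only where the image is saturated.
    import numpy as np

    def reconstruct_hdr(ldr, hdr_net, threshold=0.95, gamma=2.0):
        """ldr: float image in [0, 1] with shape (H, W, 3)."""
        linear = ldr ** gamma                         # undo an assumed display gamma
        predicted = hdr_net(ldr)                      # hypothetical trained network
        # Per-pixel mask that is 1 where any channel is close to clipping.
        saturated = (ldr.max(axis=2, keepdims=True) > threshold).astype(np.float32)
        # Keep measured values where they are valid, predictions where they are not.
        return (1.0 - saturated) * linear + saturated * predicted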
15.
  • Eilertsen, Gabriel, et al. (author)
  • How to cheat with metrics in single-image HDR reconstruction
  • 2021
  • In: 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021). - : IEEE Computer Society. - 9781665401913 ; pp. 3981-3990
  • Conference paper (peer-reviewed), abstract:
    • Single-image high dynamic range (SI-HDR) reconstruction has recently emerged as a problem well-suited for deep learning methods. Each successive technique demonstrates an improvement over existing methods by reporting higher image quality scores. This paper, however, highlights that such improvements in objective metrics do not necessarily translate to visually superior images. The first problem is the use of disparate evaluation conditions in terms of data and metric parameters, calling for a standardized protocol to make it possible to compare between papers. The second problem, which forms the main focus of this paper, is the inherent difficulty in evaluating SI-HDR reconstructions since certain aspects of the reconstruction problem dominate objective differences, thereby introducing a bias. Here, we reproduce a typical evaluation using existing as well as simulated SI-HDR methods to demonstrate how different aspects of the problem affect objective quality metrics. Surprisingly, we found that methods that do not even reconstruct HDR information can compete with state-of-the-art deep learning methods. We show how such results are not representative of the perceived quality and that SI-HDR reconstruction needs better evaluation protocols.
16.
  • Eilertsen, Gabriel, 1984-, et al. (author)
  • Model-invariant Weight Distribution Descriptors for Visual Exploration of Neural Networks en Masse
  • 2024
  • In: EuroVis 2024 - Short Papers. - 9783038682516
  • Conference paper (peer-reviewed), abstract:
    • We present a neural network representation which can be used for visually analyzing the similarities and differences in a large corpus of trained neural networks. The focus is on architecture-invariant comparisons based on network weights, estimating similarities of the statistical footprints encoded by the training setups and stochastic optimization procedures. To make this possible, we propose a novel visual descriptor of neural network weights. The visual descriptor considers local weight statistics in a model-agnostic manner by encoding the distribution of weights over different model depths. We show how such a representation can extract descriptive information, is robust to different parameterizations of a model, and is applicable to different architecture specifications. The descriptor is used to create a model atlas by projecting a model library to a 2D representation, where clusters can be found based on similar weight properties. A cluster analysis strategy makes it possible to understand the weight properties of clusters and how these connect to the different datasets and hyper-parameters used to train the models.
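The abstract above builds an architecture-invariant descriptor from weight statistics at different depths of a network. A simplified sketch of that kind of descriptor (the depth binning, quantiles, and function name are assumptions, not the paper's exact formulation) could be:

    # Illustrative sketch: summarize a model by quantiles of its weights,
    # binned by relative layer depth, giving a fixed-length descriptor that
    # does not depend on the exact architecture.
    import numpy as np
    import torch

    def weight_descriptor(model, n_depth_bins=10, quantiles=(0.05, 0.25, 0.5, 0.75, 0.95)):
        layers = [p.detach().cpu().numpy().ravel()
                  for p in model.parameters() if p.dim() > 1]   # weight tensors only
        feats = np.zeros((n_depth_bins, len(quantiles)))
        counts = np.zeros(n_depth_bins)
        for i, w in enumerate(layers):
            b = min(int(i / len(layers) * n_depth_bins), n_depth_bins - 1)
            feats[b] += np.quantile(w, quantiles)
            counts[b] += 1
        counts[counts == 0] = 1                       # avoid division by zero for empty bins
        return (feats / counts[:, None]).ravel()

    # Descriptors for a whole model library can then be projected to 2D
    # (e.g. with t-SNE or UMAP) to form the kind of model atlas described above.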
17.
  • Eilertsen, Gabriel, et al. (author)
  • Perceptually based parameter adjustments for video processing operations
  • 2014
  • In: ACM SIGGRAPH Talks 2014. - : ACM Press.
  • Conference paper (peer-reviewed), abstract:
    • Extensive post-processing plays a central role in modern video production pipelines. A problem in this context is that many filters and processing operators are very sensitive to parameter settings and that the filter responses in most cases are highly non-linear. Since there is no general solution for performing perceptual calibration of image and video operators automatically, it is often necessary to manually tweak multiple parameters. This is an iterative process which requires instant visual feedback of the result in both the spatial and temporal domains. Due to large filter kernels, computational complexity, high frame rate, and image resolution it is, however, often very time consuming to iteratively re-process and tweak long video sequences. We present a new method for rapidly finding the perceptual minima in high-dimensional parameter spaces of general video operators. The key idea of our algorithm is that the characteristics of an operator can be accurately described by interpolating between a small set of pre-computed parameter settings. By computing a perceptual linearization of the parameter space of a video operator, the user can explore this interpolated space to find the best set of parameters in a robust way. Since many operators are dependent on two or more parameters, we formulate this as a general optimization problem where we let the objective function be determined by the user's image assessments. To demonstrate the usefulness of our approach we show a set of use cases (see the supplementary material) where our algorithm is applied to computationally expensive video operations.
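The key idea in the abstract above is to precompute the operator at a few parameter settings and interpolate between the results so the user gets instant feedback while exploring the parameter space. A minimal one-parameter sketch (the names and the plain linear interpolation are simplifying assumptions):

    # Illustrative sketch: precompute an expensive video operator at a small
    # set of parameter values, then interpolate between the stored results
    # while the user explores the parameter space interactively.
    import numpy as np

    def precompute(operator, frame, param_values):
        return [operator(frame, p) for p in param_values]   # one expensive run per sample

    def interpolated_result(param, param_values, precomputed):
        i = int(np.clip(np.searchsorted(param_values, param), 1, len(param_values) - 1))
        p0, p1 = param_values[i - 1], param_values[i]
        t = (param - p0) / (p1 - p0)
        return (1.0 - t) * precomputed[i - 1] + t * precomputed[i]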
18.
  • Eilertsen, Gabriel, et al. (author)
  • Real-time noise-aware tone mapping
  • 2015
  • In: ACM Transactions on Graphics. - New York, NY, USA : Association for Computing Machinery (ACM). - 0730-0301 .- 1557-7368. ; 34:6, pp. 198:1-198:15
  • Journal article (peer-reviewed), abstract:
    • Real-time high quality video tone mapping is needed for many applications, such as digital viewfinders in cameras, display algorithms which adapt to ambient light, in-camera processing, rendering engines for video games and video post-processing. We propose a viable solution for these applications by designing a video tone-mapping operator that controls the visibility of the noise, adapts to display and viewing environment, minimizes contrast distortions, preserves or enhances image details, and can be run in real-time on an incoming sequence without any preprocessing. To our knowledge, no existing solution offers all these features. Our novel contributions are: a fast procedure for computing local display-adaptive tone-curves which minimize contrast distortions, a fast method for detail enhancement free from ringing artifacts, and an integrated video tone-mapping solution combining all the above features.
19.
  • Eilertsen, Gabriel, et al. (author)
  • REAL-TIME NOISE-AWARE TONE-MAPPING AND ITS USE IN LUMINANCE RETARGETING
  • 2016
  • In: 2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP). - : IEEE. - 9781467399616 ; pp. 894-898
  • Conference paper (peer-reviewed), abstract:
    • With the aid of tone-mapping operators, high dynamic range images can be mapped for reproduction on standard displays. However, for large restrictions in terms of display dynamic range and peak luminance, limitations of the human visual system have a significant impact on the visual appearance. In this paper, we use components from the real-time noise-aware tone-mapping operator to complement an existing method for perceptual matching of image appearance under different luminance levels. The refined luminance retargeting method improves subjective quality on a display with large limitations in dynamic range, as suggested by our subjective evaluation.
20.
  • Eilertsen, Gabriel, 1984-, et al. (author)
  • Single-frame Regularization for Temporally Stable CNNs
  • 2019
  • In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. - 9781728132938 - 9781728132945 ; pp. 11176-11185
  • Conference paper (peer-reviewed), abstract:
    • Convolutional neural networks (CNNs) can model complicated non-linear relations between images. However, they are notoriously sensitive to small changes in the input. Most CNNs trained to describe image-to-image mappings generate temporally unstable results when applied to video sequences, leading to flickering artifacts and other inconsistencies over time. In order to use CNNs for video material, previous methods have relied on estimating dense frame-to-frame motion information (optical flow) in the training and/or the inference phase, or by exploring recurrent learning structures. We take a different approach to the problem, posing temporal stability as a regularization of the cost function. The regularization is formulated to account for different types of motion that can occur between frames, so that temporally stable CNNs can be trained without the need for video material or expensive motion estimation. The training can be performed as a fine-tuning operation, without architectural modifications of the CNN. Our evaluation shows that the training strategy leads to large improvements in temporal smoothness. Moreover, for small datasets the regularization can help in boosting the generalization performance to a much larger extent than what is possible with naive augmentation strategies.
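The abstract above poses temporal stability as a regularization term, so that no video data or motion estimation is needed during training. A minimal sketch of such a term (the perturbation model, `alpha`, and `sigma` are assumptions; the paper's exact formulation accounts for several motion types):

    # Illustrative sketch (PyTorch): add a stability term that penalizes output
    # differences under a small random perturbation of the input, approximating
    # frame-to-frame changes without using any video material.
    import torch

    def stability_regularized_loss(net, x, target, task_loss, alpha=0.1, sigma=0.01):
        y = net(x)
        x_perturbed = x + sigma * torch.randn_like(x)   # stand-in for inter-frame motion/noise
        y_perturbed = net(x_perturbed)
        stability = torch.mean((y - y_perturbed) ** 2)  # outputs should change little
        return task_loss(y, target) + alpha * stability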
Type of publication
conference papers (57)
journal articles (36)
doctoral theses (11)
other publications (3)
proceedings (editorship) (2)
book chapters (2)
reports (1)
Type of content
peer-reviewed (84)
other academic/artistic (28)
Author/editor
Unger, Jonas, 1978- (66)
Unger, Jonas (34)
Kronander, Joel (20)
Eilertsen, Gabriel, ... (18)
Ynnerman, Anders (12)
Miandji, Ehsan (12)
Eilertsen, Gabriel (12)
Miandji, Ehsan, 1985 ... (10)
Tsirikoglou, Apostol ... (7)
Sintorn, Ida-Maria (6)
Larsson, Per (6)
Mantiuk, Rafal K. (6)
Mantiuk, Rafal (6)
Hajisharif, Saghi (6)
Hajisharif, Saghi, 1 ... (6)
Felsberg, Michael (5)
Lundström, Claes, 19 ... (5)
Ynnerman, Anders, 19 ... (5)
Forssén, Per-Erik (5)
Stacke, Karin, 1990- (5)
Unger, Jonas, Profes ... (5)
Guillemot, Christine (4)
Jönsson, Daniel, 198 ... (4)
Gardner, Andrew (4)
Ollila, Mark (3)
Jaroudi, Rym, 1989- (3)
Tsirikoglou, Apostol ... (3)
Wanat, Robert (3)
Emadi, Mohammad (3)
Eilertsen, Gabriel, ... (3)
Åström, Kalle (2)
Fratarcangeli, Marco ... (2)
Wenger, Andreas (2)
Fjeld, Morten, 1965 (2)
Heyden, Anders (2)
Malý, Lukáš, 1983- (2)
Vrotsou, Katerina, 1 ... (2)
Ropinski, Timo (2)
Ynnerman, Anders, Pr ... (2)
Baravdish, George, 1 ... (2)
Johansson, Tomas, 19 ... (2)
Baravdish, Gabriel, ... (2)
Forssén, Per-Erik, 1 ... (2)
Stacke, Karin (2)
Navarra, Carlo, 1982 ... (2)
Kucher, Kostiantyn, ... (2)
Banterle, Francesco (2)
Hanji, Param (2)
Ynnerman, Anders, Pr ... (2)
Per, Larsson, 1982- (2)
University
Linköpings universitet (102)
Lunds universitet (6)
Uppsala universitet (4)
Chalmers tekniska högskola (3)
Stockholms universitet (1)
Karolinska Institutet (1)
Naturhistoriska riksmuseet (1)
Language
English (109)
Swedish (2)
Latin (1)
Research subject (UKÄ/SCB)
Natural sciences (53)
Engineering and technology (41)
Medical and health sciences (3)
Social sciences (1)
Humanities (1)
